functions [42], which is defined as a ratio of two tropical polynomials, themselves formed by maxima of affine functions. Closely related to this is the intuitive geometric understanding of consecutive layers of ReLU networks as space-folding transformations that allow a compact representation of similarities in the input space [20, 28, 29]. In ReLU networks, negative pre-activations are silenced with true zeros, which allows sparse representations. This may lead to better generalization on unseen data, but forcing too much sparsity can damage the prediction, as it effectively reduces the model capacity [7]. This phenomenon, known as "the dying ReLU problem", has been a major caveat of ReLU networks. It led to a plethora of linear unit functions including, but not limited to, LeakyReLU [26], PReLU [9], GELU [11], SELU [17], SiLU/Swish [4, 31], and ELU [2]. All these functions introduce non-zero activations for negative pre-activation values, offering different trade-offs between ReLU's modeling benefits and the advantages of smooth, continuous gradients.

In this paper, we address the limitations of ReLU without sacrificing its advantages by introducing a novel approach: the application of a Surrogate Gradient for ReLU (SUGAR). SUGAR allows models to retain standard ReLU activations while ensuring stable gradient flow even for negative pre-activations. Further contributions of this work are as follows:

• We propose two new surrogate gradient functions, B-SiLU and NeLU, which integrate seamlessly into a variety of models. These functions consistently improve generalization performance.
• We conduct comprehensive experiments with VGG-16 [35] and ResNet-18 [8], demonstrating that SUGAR significantly enhances generalization in both architectures.
• We evaluate SUGAR on modern architectures such as Swin Transformer [23] and Conv2NeXt [5], showcasing its adaptability and effectiveness.
• An in-depth analysis of VGG-16 layer activations reveals a distinct shift in activation distributions when SUGAR is applied, providing visual evidence for its role in mitigating the dying ReLU problem while, at the same time, fostering sparser representations.
• We further explore the loss landscape with and without SUGAR, offering deeper insight into its optimization benefits.

The proposed SUGAR method offers several desirable properties. It is simple to implement and consistently utilizes ReLU in the forward pass. When combined with the proposed B-SiLU surrogate function, VGG-16 achieves improvements of 10 and 16 percentage points in test accuracy on CIFAR-10 and CIFAR-100 [18], respectively, while ResNet-18 shows corresponding gains of 9 and 7 percentage points compared to the best-performing models without SUGAR.

2 Background

2.1 Surrogate gradient learning

In conventional artificial neural networks, learning relies on gradient-based optimization methods such as backpropagation, which require continuous, differentiable activation functions. However, spiking neural networks (SNNs) are discrete and non-differentiable, making direct application of backpropagation infeasible. Surrogate gradient learning emerged as a solution to train SNNs by replacing the gradient of the non-differentiable spiking function with a smooth and differentiable approximation [30]. These surrogate functions allow gradients to flow through the network during training, enabling the use of powerful optimization techniques in neuromorphic and event-based computing settings. This idea gained traction in the late 2010s and
https://arxiv.org/abs/2505.22074v1
has since become a cornerstone for training biologically inspired spiking models in a computationally efficient manner [1, 12, 38, 41]. Recently, a related approach called ProxyGrad [22] improved activation maximization (AM) in convolutional networks by manipulating gradients during optimization. The study showed that using LeakyReLU in the backward pass, while keeping ReLU in the forward pass, enabled AM to escape poor local optima and reach higher activation values. As a result, the method produced more informative and interpretable feature visualizations.

2.2 Forward gradient injection (FGI)

Forward Gradient Injection (FGI) is the backbone algorithm in SUGAR. It was introduced in [34] as a surrogate gradient strategy for SNNs. It exploits the stop-gradient operator (i.e., .detach()) to manipulate the gradients such that a model with non-differentiable spikes becomes trainable via gradient signals. FGI enables gradient injection during the forward pass with the following equation (indirect surrogate gradient function):

y = g(x) − sg(g(x)) + sg(f(x))   (1)

where sg(·) is the stop-gradient operator, f(·) is the forward computation with its gradient bypassed, and g(·) is another function whose gradient is instead injected over the variable of interest x while not contributing to the forward result due to the subtraction with itself. Choosing g(·) as an activation function with non-zero gradients for negative inputs, ReLU networks can be trained without suffering from the dying ReLU problem. However, Equation 1 requires the gradient computation of g(·) in the backward pass. The following multiplication trick allows ˜g(·) to act exactly as the derivative of f(·) in the backward pass (direct surrogate gradient function):

m = x · sg(˜g(x))   (2)
y = m − sg(m) + sg(f(x))   (3)

FGI enables the injection of surrogate gradient functions directly in the forward pass, independent of the original function f(·).
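As a concrete illustration, the two FGI variants can be sketched in PyTorch (a minimal sketch under the assumption of a standard autograd setup; the ReLU/sigmoid pairing below is only an example, not the configuration used later in the paper):

```python
import torch

def fgi_indirect(x, f, g):
    # Eq. 1: the forward value equals f(x), because g(x) - sg(g(x)) cancels
    # numerically; the backward gradient is g'(x), since sg(.) (.detach())
    # carries no gradient.
    return g(x) - g(x).detach() + f(x).detach()

def fgi_direct(x, f, g_tilde):
    # Eqs. 2-3 (multiplication trick): the forward value equals f(x); the
    # effective backward gradient is exactly g_tilde(x).
    m = x * g_tilde(x).detach()
    return m - m.detach() + f(x).detach()

# Example: ReLU in the forward pass, a sigmoid-shaped gradient backward.
x = torch.tensor([-1.5, 0.5, 2.0], requires_grad=True)
y = fgi_direct(x, torch.relu, torch.sigmoid)
y.sum().backward()
# y matches relu(x), while x.grad matches sigmoid(x), even where x < 0.
```

Note that `fgi_direct` never differentiates `g_tilde` itself, which is what allows the surrogate derivative to be specified directly.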
The results in [34] suggest that FGI can improve both model optimizability and exportability over classical deployments of surrogate gradients (i.e., overriding the backward function).

3 Surrogate gradient for ReLU (SUGAR)

Our proposed method applies FGI in ReLU networks with smooth surrogate functions. We extend the potential applications of surrogate gradients beyond SNNs and aim to introduce this framework as a technique to overcome the disadvantages of ordinary ReLUs. Indirect FGI within the context of SUGAR can be expressed as:

y = f(x) − sg(f(x)) + sg(ReLU(x))   (4)

This formulation enables gradient injection and ensures gradient propagation even for negative activations. Specifically, using the multiplication trick from [34], the direct injection of the surrogate gradient function is accomplished via:

m = x · sg(˜f(x))   (5)
y = m − sg(m) + sg(ReLU(x))   (6)

Here, ˜f(x) explicitly defines the surrogate gradient behavior for ReLU. The choice of the surrogate function is flexible and can include activation functions commonly employed in state-of-the-art applications, such as ELU, GELU, SiLU, SELU, and LeakyReLU (see Figure 8). These functions typically possess desirable properties motivated by mechanisms like self-gating or self-normalization. Importantly, these surrogate candidates share the characteristic of having non-zero gradients for negative inputs (x < 0), unlike ReLU. Although surrogate functions enable gradient flow for negative activations, the forward pass and subsequent loss computation strictly depend on activations for x > 0. Consequently, the effect of surrogate gradient learning in this setting can be interpreted as filtering out pre-activations below the
cut-off threshold, reducing the network's propensity to overfit due to topological simplification and sparsity [27, 29], but without harming the gradient flow. In preliminary studies, we realized the need to adapt current activation functions to the specific purpose of SUGAR. As will be discussed in depth later, SUGAR's effect varies across different regularization settings. Therefore, in the following, we propose two new surrogate functions that align well with these settings.

3.1 B-SiLU

We introduce a novel activation function named Bounded Sigmoid Linear Unit (B-SiLU), which combines self-gating characteristics with a tunable lower bound parameter. Mathematically, this function can be expressed as:

B-SiLU(x) = (x + α) · σ(x) − α/2, with α = 1.67   (7)

where σ(x) denotes the sigmoid activation. The derivative of the B-SiLU activation function is given by:

d/dx B-SiLU(x) = σ(x) + (x + α) σ(x)(1 − σ(x))   (8)

Both B-SiLU and its derivative are visualized in Figure 8. The B-SiLU activation function emerged from exploratory experiments with SUGAR, drawing particular inspiration from the swish-like sigmoidal function introduced in [33] and the related thresholding operator examined in [15, 40]. It is motivated by the goal of combining the advantageous properties of SiLU's self-gating behavior and GELU's smoothness.

3.2 NeLU

We further introduce the Negative slope Linear Unit (NeLU) as a smooth derivative substitute for ReLU. It is inspired by the constant derivative of ReLU for x > 0 and a smooth negative slope from GELU for x < 0. For large negative inputs, the resulting gradient converges back to zero.

d/dx NeLU(x) = 1, if x > 0;  α · 2x / (1 + x²)², else   (9)

α controls the magnitude of the negative gradient for small values of x < 0, which ensures stability. The resulting gradient is shown in Figure 1. By applying the multiplication trick in Equation 5 and Equation 6, it is possible to directly set the gradient of the activation as in Equation 9.
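To make these definitions concrete, the following PyTorch fragment sketches B-SiLU (Eq. 7), the NeLU gradient (Eq. 9), and SUGAR via direct FGI (Eqs. 5–6). This is an illustrative sketch assuming a standard PyTorch setup; the function names and default α values are ours, and the paper's reference implementation may differ in detail:

```python
import torch

def b_silu(x, alpha=1.67):
    # Eq. 7: B-SiLU(x) = (x + alpha) * sigmoid(x) - alpha / 2
    return (x + alpha) * torch.sigmoid(x) - alpha / 2

def nelu_grad(x, alpha=0.1):
    # Eq. 9: constant gradient 1 for x > 0; a smooth negative slope for
    # x <= 0 that decays back to zero for large negative inputs.
    return torch.where(x > 0, torch.ones_like(x),
                       alpha * 2 * x / (1 + x ** 2) ** 2)

def sugar(x, surrogate_grad):
    # Eqs. 5-6: ReLU in the forward pass, surrogate_grad(x) acting as the
    # effective derivative in the backward pass.
    m = x * surrogate_grad(x).detach()
    return m - m.detach() + torch.relu(x).detach()
```

For indirect SUGAR with B-SiLU (Eq. 4), one would instead compute `b_silu(x) - b_silu(x).detach() + torch.relu(x).detach()`, so that autograd applies Eq. 8 automatically.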
Figure 1: Comparison of activation functions and their derivatives. From left to right: ReLU (dashed) and its derivative (black), the ReLU activation function with the B-SiLU derivative (blue), and the ReLU activation function with the NeLU derivative (α = 0.2, red).

4 Experiments

4.1 Surrogate function comparison on CIFAR-10/100

We conducted extensive experiments to evaluate and compare various activation functions, both with and without SUGAR, on the CIFAR-10 and CIFAR-100 datasets using ResNet-18 and VGG-16 architectures. In every run where SUGAR is applied, the forward function is always the standard ReLU; for a given surrogate activation f (e.g., ELU), we compare the performance of the network using f in the forward pass and its true gradient in the backward pass (non-SUGAR scenario) against the identical architecture using ReLU for the forward pass but f's gradient for the backward pass (e.g., SUGAR with ELU). Each configuration was trained five times with different random seeds to ensure robustness of the results. To isolate the generalization effects of the activation functions and the application of SUGAR, we did not apply any data augmentation (see Appendix B for the full experimental setup). The set of surrogate functions evaluated included LeakyReLU, SELU, ELU, GELU, SiLU (Swish), Mish, and our
proposed B-SiLU and NeLU. See Appendix C for the complete experimental results.

Figure 2: Validation loss of ResNet-18 on CIFAR-100 with and without SUGAR. In the legend, the corresponding test accuracies (of the respective functions as surrogates) are given for completeness: ELU (52.4±0.4%), GELU (44.3±0.5%), Leaky ReLU (43.4±0.3%), Mish (48.4±0.3%), NeLU (α=0.01) (43.2±0.4%), NeLU (α=0.05) (42.4±0.4%), NeLU (α=0.1) (41.9±0.2%), B-SiLU (56.5±0.5%), ReLU (43.4±0.6%), SELU (52.1±0.6%), Swish (44.1±0.3%). See Section E.1 for all the convergence plots from the experiments.

Overall, SUGAR with ELU, SELU, and especially B-SiLU delivered the largest gains over the ReLU baseline, whereas LeakyReLU and NeLU consistently underperformed (Figure 2). On CIFAR-10 with a ResNet-18 backbone, B-SiLU rose from 76.76% to 86.42% with SUGAR. VGG-16 showed a similar pattern: B-SiLU improved the test accuracy by almost 10 points (78.50% → 88.35%). On CIFAR-100, the superiority of SUGAR with B-SiLU was even more pronounced: ResNet-18's accuracy jumped from 48.99% to 56.51%, and VGG-16's from 48.73% to 64.47% (Figure 3). Again, LeakyReLU and NeLU showed negligible or negative gains (e.g., 43.67% → 43.41% on ResNet-18), indicating that simple linear leakage fails to harness the benefits of SUGAR under this setup. In summary, B-SiLU outperforms all other surrogates across architectures and datasets, ELU and SELU provide reliable improvements, and SUGAR does not benefit meaningfully from LeakyReLU and NeLU in this setting.

4.2 Stability improvements for deep ReLU networks

To evaluate SUGAR's effectiveness in addressing the dying ReLU problem, we revisited a controlled setting introduced in [25], where a deep and narrow ReLU network with symmetric weight initialization fails to learn due to widespread neuron inactivity.
We replicated and extended the original experiments by incorporating SUGAR with B-SiLU surrogate gradients, which greatly improved layer activation probabilities and learning outcomes across multiple simple regression tasks. Our results demonstrate that SUGAR enables gradient flow even through inactive neurons, reducing collapse rates and enhancing model expressivity. For an in-depth analysis, please refer to Appendix A.

Figure 3: Test accuracy of VGG-16 on CIFAR-100, comparing non-SUGAR (red) and SUGAR (blue) for each activation function. The black bar represents the baseline, where the model is simply trained with ReLU (forward and backward). See Section E.3 for all the accuracy plots from the experiments.

4.3 Conv2NeXt and Swin Transformer

We further investigate SUGAR's potential in state-of-the-art models. In this regard, we have chosen one convolution-based and one attention-based model. Conv2NeXt [5] was chosen as the convolutional model because it shares the architecture of ConvNeXt [24] but is adapted for smaller datasets. Swin Transformer [23] is, on the other hand, a vision transformer model. It was developed for large datasets (i.e., ImageNet-1k), and the smallest model has 28M parameters. As
we trained the model on Tiny-ImageNet-200 [3], it is considerably over-parameterized for the dataset. The models were adopted in their original form from [23] and [5]. The only modification applied was replacing the activation function with the corresponding SUGAR implementation. When applied to Conv2NeXt, SUGAR consistently outperforms the base models that use GELU in both the forward and backward pass, as shown in Table 1. Although our reproduction of the Conv2NeXt results from [5] stays below the reported 83.84% accuracy with GELU, SUGAR with NeLU exceeds this value. In the case of the Swin Transformer, despite a slight drop in performance for B-SiLU, NeLU with α = 0.01 yields a higher accuracy than both base models.

Table 1: Top-1 accuracy (%) of Conv2NeXt and Swin Transformer models, trained on CIFAR-100 and Tiny-ImageNet-200, respectively. Base models do not apply SUGAR. For both models, the original works apply GELU for the activation. Our proposed equations use ReLU in the forward pass. NeLU is reported for different α values. Best models are highlighted.

Models            ReLU (base)  GELU (base)  B-SiLU      NeLU(0.01)  NeLU(0.05)  NeLU(0.1)
Conv2NeXt         83.85±0.2    83.74±0.2    83.87±0.2   83.84±0.1   83.84±0.3   83.95±0.2
Swin Transformer  61.25±0.1    61.39±0.5    57.15±0.2   61.48±0.3   61.24±0.3   61.21±0.2

5 Discussion

In the subsequent analysis, we examine the SUGAR effect by investigating the layer activations for each sample. The results show a clear shift in the activation distribution. Afterwards, the loss landscape of ResNet-18 is analyzed with and without SUGAR. Finally, we explore the potential of treating SUGAR as a form of regularization.

5.1 SUGAR revives dead neurons

To shed light on how surrogate gradients affect internal representations, we analyzed the activation profiles of a VGG-16 backbone after 40 training epochs on CIFAR-100 (see Figure 4).
For each data sample, we recorded how many times a neuron produced an output over the course of an entire training epoch, such that the resulting distribution reflects how frequently neurons effectively participate in the forward pass. The frequency at point 0 on the x-axis corresponds to the neurons that are inactive over all samples (i.e., dead neurons).

Figure 4: Activation profiles of VGG-16 trained on CIFAR-100 after 40 epochs, for (a) ReLU and (b) SUGAR (B-SiLU). The x-axis shows the activation count per data sample, the y-axis indicates the layer index (with the final layer being fully connected), and the z-axis shows the normalized frequency, allowing for comparison across layers and activation functions.

A first striking difference between the vanilla ReLU baseline and the network trained with SUGAR (B-SiLU) appears in layers 12 and 13. The ReLU model shows a flat distribution in the activation-count histogram: a sizable group of neurons never fire (dead neurons) while others remain permanently active. In the SUGAR model, the same layers follow an approximately normal distribution centered on moderate activation counts, indicating that bounded surrogate gradients keep gradient flow alive and prevent neurons from becoming functionally inert. A second difference concerns the shallow part
of the network. The first four convolutional layers of the SUGAR model exhibit slightly flatter, right-skewed distributions whose mode is lower than in the baseline. Hence, on average, fewer filters are active for a given image, suggesting that the surrogate-optimized activation encourages selectivity and reduces redundant feature maps, potentially improving generalization.

Taken together, these observations indicate that bounded surrogate gradients simultaneously mitigate the dead-neuron problem and promote sparsity where it is most beneficial. The resulting balance of wide early sparsity combined with well-behaved deep activations may contribute to the improved generalization reported in Section 4. The reduced average activation rate also hints at tangible gains for deployment on resource-constrained hardware, where memory traffic and multiply-accumulate counts scale with the number of active neurons.

The present analysis is limited to layer-wise aggregate statistics. Future work should track the temporal dynamics of activations during training and evaluate whether the same trends generalize to other architectures and datasets. We observed the same activation-profile patterns on CIFAR-10 across all tested models. The corresponding plots can be found in Section E.2.

5.2 Loss surface analysis

To understand how surrogate gradients reshape the optimization geometry, we visualized the loss landscape in the neighborhood of the trained weights after 10 epochs of training a ResNet-18 on CIFAR-100 (see Figure 5). Following the standard two-direction procedure in [21], we sampled two random directions, rescaled layer-wise to match the ℓ2-norm of the corresponding weight tensors (batch-norm parameters frozen). We evaluated the loss on a 100×k grid spanning [−0.25, 0.25]² and plotted the resulting contours for the vanilla ReLU model and for the same model trained with SUGAR (B-SiLU).
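The direction-sampling and perturbed-loss evaluation described above can be sketched as follows. This is an illustrative fragment under our reading of [21] (tensor-wise rather than filter-wise normalization, with the grid loop and plotting omitted); the function names are ours:

```python
import torch

def normalized_direction(model):
    # Sample one random direction and rescale it layer-wise so each
    # component matches the l2-norm of the corresponding weight tensor.
    direction = []
    for p in model.parameters():
        d = torch.randn_like(p)
        direction.append(d * p.norm() / (d.norm() + 1e-10))
    return direction

def perturbed_loss(model, loss_fn, d1, d2, a, b):
    # Evaluate the loss at theta + a*d1 + b*d2, then restore the trained
    # weights, so the model is left unchanged.
    originals = [p.detach().clone() for p in model.parameters()]
    with torch.no_grad():
        for p, o, u, v in zip(model.parameters(), originals, d1, d2):
            p.copy_(o + a * u + b * v)
        loss = loss_fn(model)
        for p, o in zip(model.parameters(), originals):
            p.copy_(o)
    return loss.item()
```

Sweeping `a` and `b` over the grid and contour-plotting `perturbed_loss` yields landscapes of the kind shown in Figure 5.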
Figure 5: Loss landscapes visualized around the trained solution using different gradient flows. SUGAR (B-SiLU) smooths the optimization surface while retaining the ReLU forward pass, leading to a more stable geometry.

The landscape of the vanilla network exhibits a relatively flat basin at the center but rises steeply toward the edges. At extreme weight perturbations (the corners of the grid), the loss exceeds 25, creating sharp cliffs that can impede optimization. In contrast, the SUGAR landscape is markedly more convex, and the loss remains low even at large perturbations. The smoother surface implies better-conditioned gradients and helps explain the faster convergence we observe during training in Figure 2 and Section E.1.

5.3 SUGAR as a regularization technique

The empirical results in Section 5.1 present evidence for a distribution shift in particular layers. These distributions display sparse activities induced by the ReLU activation function. There is a substantial body of work exploring the relationship between sparsity and generalization [37, 6]. We aim to examine SUGAR from the perspective of regularization. The most widespread and simple regularization is weight decay, which modulates the gradient of the pure cost function towards small weights. SUGAR also regularizes by modulating the gradient of the pure ReLU-based cost function; however, by employing surrogate activation functions, it does so in a more sophisticated and adaptive manner, depending on
the activation patterns. Consider the models investigated in this work. The results in Section 4.1 and Section 5.1 use models that are not heavily regularized. In this setting, choosing a surrogate function that deviates considerably from the ReLU derivative induces harsher regularization and improves the predictive performance. However, in a highly regularized setting such as in Section 4.3, additional regularization has to be applied carefully and in a nuanced manner to avoid underfitting. In such a scenario, it is advisable to choose a function (e.g., NeLU) that deviates only slightly from ReLU's derivative, providing gradient flow for pre-activations below the cut-off threshold while modeling the backward characteristics of ReLU as closely as possible. This behavior is reflected in our results: while B-SiLU leads to substantial improvements in generalization for VGG-16 and ResNet-18 (see Section 4.1), NeLU proves to be more effective in enhancing generalization in the already regularized Conv2NeXt and Swin Transformer models (see Section 4.3).

6 Conclusion

This work provides compelling evidence that surrogate gradient learning, originally applied in the spiking neural network domain, can significantly benefit classical ReLUs in non-spiking deep neural networks. By preserving ReLU in the forward pass while substituting its derivative with a smooth surrogate function during backpropagation, SUGAR enables robust training dynamics and improved generalization, especially in convolutional architectures like VGG-16 and ResNet-18. Our findings suggest that SUGAR, combined with carefully designed surrogate functions such as B-SiLU and NeLU, offers an elegant solution to the long-standing dying ReLU problem. B-SiLU, in particular, introduces bounded smooth gradients that not only prevent neuron inactivity but also encourage beneficial sparsity patterns.
NeLU, on the other hand, offers a more conservative regularization effect through its smooth negative slope, preserving ReLU's structural simplicity and beneficial properties while improving gradient flow for suppressed activations. Although the exact influence of NeLU's negative slope on training dynamics remains an open question, our experiments indicate that it contributes to gradient propagation in strongly regularized models like Conv2NeXt and Swin Transformer. This suggests a nuanced interaction between surrogate gradient shape and model regularization strength, pointing to promising avenues for future exploration. In conclusion, this work repositions classical ReLU not as a relic, but as a resilient component in the deep learning toolbox. With appropriate gradient handling, ReLU-based networks can match or even outperform modern architectures that rely on more complex activations.

6.1 Limitations and future work

SUGAR's performance varies notably across different model families. It shows clear benefits in deep, less-regularized networks like VGG-16 and ResNet-18, but is less effective, or even detrimental, in highly regularized architectures such as Conv2NeXt and Swin Transformer if the surrogate gradient strongly deviates from the ReLU derivative. The surrogate functions introduced were crafted through empirical intuition and trial-based tuning rather than grounded in formal design principles. Moreover, our results suggest improvements in training dynamics and generalization, but the study does not yet offer formal guarantees on convergence, stability, or generalization bounds. Without a rigorous analytical framework, it is difficult to predict SUGAR's behavior across different training regimes. Our evaluation focuses primarily on image classification tasks using datasets like
CIFAR-10, CIFAR-100, and Tiny-ImageNet, in addition to a few toy problems. It is uncertain how SUGAR would perform in other domains such as natural language processing, reinforcement learning, or time series modeling, where activation dynamics and gradient propagation can differ substantially. Beyond addressing the current limitations and open questions, future research may assess automatic surrogate function search tailored for specific architectures and datasets. It may also be interesting to consider dynamic surrogates that adapt based on training signals and activation distributions, or in a scheduled manner. Moreover, given that SUGAR improves sparsity and reduces the activation profiles within the network while applying just simple ReLU, it may prove beneficial for structured pruning, quantization-aware training, or energy-efficient inference in the context of low-footprint models.

References

[1] Guillaume Bellec, Darjan Salaj, Anand Subramoney, Robert Legenstein, and Wolfgang Maass. Long short-term memory and learning-to-learn in networks of spiking neurons. Advances in Neural Information Processing Systems, 31, 2018.
[2] Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). In International Conference on Learning Representations (ICLR), 2016.
[3] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
[4] Stefan Elfwing, Eiji Uchibe, and Kenji Doya. Sigmoid-weighted linear units for neural network function approximation in reinforcement learning. Neural Networks, 107:3–11, 2018. Special issue on deep reinforcement learning.
[5] Jianwei Feng, Hengliang Tan, Wangwang Li, and Ming Xie. Conv2NeXt: Reconsidering ConvNeXt network design for image recognition.
In 2022 International Conference on Computers and Artificial Intelligence Technologies (CAIT), pages 53–60, 2022.
[6] Trevor Gale, Erich Elsen, and Sara Hooker. The state of sparsity in deep neural networks. arXiv preprint arXiv:1902.09574, 2019.
[7] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In Geoffrey Gordon, David Dunson, and Miroslav Dudík, editors, Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, volume 15 of Proceedings of Machine Learning Research, pages 315–323, Fort Lauderdale, FL, USA, 11–13 Apr 2011. PMLR.
[8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.
[9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 1026–1034, 2015.
[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[11] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (GELUs). arXiv preprint arXiv:1606.08415, 2016.
[12] Saya Higuchi, Sebastian Kairat, Sander Bohté, and Sebastian Otte. Balanced resonate-and-fire neurons. In Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pages 18305–18323. PMLR, 21–27 Jul 2024.
[13] Andrew Jesson, Chris Lu, Gunshi Gupta, Nicolas Beltran-Velez, Angelos Filos, Jakob N. Foerster,
and Yarin Gal. ReLU to the rescue: Improve your on-policy actor-critic with positive advantages. In Proceedings of the 41st International Conference on Machine Learning, ICML'24. JMLR.org, 2024.
[14] Nandan Kumar Jha and Brandon Reagen. ReLU's revival: On the entropic overload in normalization-free large language models. 2nd Workshop on Attributing Model Behavior at Scale (NeurIPS), 2024.
[15] Geoffrey Kasenbacher, Felix Ehret, Gerrit Ecke, and Sebastian Otte. WARP-LCA: Efficient convolutional sparse coding with locally competitive algorithm. Neurocomputing, page 130291, 2025.
[16] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[17] Günter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. Self-normalizing neural networks. In Advances in Neural Information Processing Systems (NIPS), volume 30, 2017.
[18] Alex Krizhevsky. Learning multiple layers of features from tiny images. Master's thesis, Department of Computer Science, University of Toronto, 2009.
[19] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In F. Pereira, C.J. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems, volume 25. Curran Associates, Inc., 2012.
[20] Michal Lewandowski, Hamid Eghbalzadeh, Bernhard Heinzl, Raphael Pisoni, and Bernhard A. Moser. On space folds of ReLU neural networks. Transactions on Machine Learning Research, 2025.
[21] Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the loss landscape of neural nets. Advances in Neural Information Processing Systems, 31, 2018.
[22] Christoph Linse, Erhardt Barth, and Thomas Martinetz. Leaky ReLUs that differ in forward and backward pass facilitate activation maximization in deep neural networks. In 2024 International Joint Conference on Neural Networks (IJCNN), pages 1–8, 2024.
[23] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin Transformer: Hierarchical vision transformer using shifted windows. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 9992–10002, 2021.
[24] Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A ConvNet for the 2020s. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11966–11976, 2022.
[25] Lu Lu, Yeonjong Shin, Yanhui Su, and George Em Karniadakis. Dying ReLU and initialization: Theory and numerical examples. Communications in Computational Physics, 28(5):1671–1706, January 2020.
[26] Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. Rectifier nonlinearities improve neural network acoustic models. In Proc. ICML Workshop Track, volume 30, 2013.
[27] Dushyant Mehta, Kwang In Kim, and Christian Theobalt. On implicit filter level sparsity in convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 520–528, 2019.
[28] Guido Montúfar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio. On the number of linear regions of deep neural networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2, NIPS'14, pages 2924–2932, Cambridge, MA, USA, 2014. MIT Press.
[29] Gregory Naitzat, Andrey Zhitnikov, and Lek-Heng Lim. Topology of deep neural networks. Journal of Machine Learning Research, 21(1), January 2020.
[30] Emre O. Neftci, Hesham Mostafa, and
https://arxiv.org/abs/2505.22074v1
Friedemann Zenke. Surrogate gradient learning in spiking neural networks: Bringing the power of gradient-based optimization to spiking neural networks. IEEE Signal Processing Magazine, 36:51–63, 2019.
[31] Prajit Ramachandran, Barret Zoph, and Quoc V Le. Searching for activation functions. In 6th International Conference on Learning Representations, ICLR 2018, Workshop Track Proceedings, 2018.
[32] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation, 2015.
[33] Christopher J Rozell, Don H Johnson, Richard G Baraniuk, and Bruno A Olshausen. Sparse coding via thresholding and local competition in neural circuits. Neural Computation, 20(10):2526–2563, 2008.
[34] Sebastian Otte. Flexible and efficient surrogate gradient modeling with forward gradient injection. In The First Austrian Symposium on AI, Robotics, and Vision 2024 (AIROV24), pages 451–459, 2024.
[35] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[36] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
[37] Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. Advances in Neural Information Processing Systems, 29, 2016.
[38] Bojian Yin, Federico Corradi, and Sander M Bohté. Accurate and efficient time-domain classification with adaptive spiking recurrent neural networks. Nature Machine Intelligence, 3(10):905–913, 2021.
[39] M.D. Zeiler, M. Ranzato, R. Monga, M. Mao, K. Yang, Q.V. Le, P. Nguyen, A. Senior, V. Vanhoucke, J. Dean, and G.E. Hinton. On rectified linear units for speech processing. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 3517–3521, 2013.
[40] Jinshan Zeng, Shaobo Lin, Yao Wang, and Zongben Xu. L1/2 regularization: Convergence of iterative half thresholding algorithm. IEEE Transactions on Signal Processing, 62(9):2317–2329, 2014.
[41] Friedemann Zenke and Tim P Vogels. The remarkable robustness of surrogate gradient learning for instilling complex function in spiking neural networks. Neural Computation, 33(4):899–925, 2021.
[42] Liwen Zhang, Gregory Naitzat, and Lek-Heng Lim. Tropical geometry of deep neural networks. In Jennifer G. Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10–15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 5819–5827. PMLR, 2018.

A A toy example for solving the dying ReLU problem with SUGAR

In [25], a deliberate setting for dying ReLUs was created. Under the given configuration, a 10-layered ReLU network produced constant output due to the dying ReLU problem. The proposed solution is Randomized Asymmetric Initialization (RAI) of the weights, such that there are fewer negative activations. In this section, we replicate and augment these experiments with SUGAR and show that SUGAR is able to dramatically reduce the probability of dead activations. The network in question consists of 10 feedforward hidden layers with a width of 2, which employ only ReLU activations. As shown in [25], when a neural network is sufficiently deep relative to its width and initialized with symmetric
weights, the initial activation probabilities tend to be near zero. In that case, in more than 90% of trials, the network fails to learn meaningful representations due to widespread irrevocable neuron inactivity. To conduct the evaluation, four distinct toy datasets in the range of [−1.5, 1.5] were adapted from [25] as regression tasks. For each function, we drew 3000 samples from U[−√3, √3]^d_in, used as input for the network. d_in denotes the input dimension, which is 2 for Equation 13 and 1 for the rest. For each task, 100 independent runs were carried out. The corresponding equations are as follows:

f1(x) = |x|    (10)
f2(x) = x · sin(5x)    (11)
f3(x) = 1_{x>0}(x) + 0.2 sin(5x)    (12)
f4(x1, x2) = (|x1 + x2|, |x1 − x2|)    (13)

The loss was calculated as the mean squared error over 250 epochs. Adam [16] was used as optimizer, with a learning rate of 0.005 decreased by a factor of 0.1 at epochs 100, 150, 200, and 225. The minibatch size is 64 for every target function. As surrogate gradient in this experiment, the derivative of B-SiLU is used. Figure 6 clearly illustrates that the ReLU network fails to solve the regression task on Equation 10, as has already been shown in [25]. Once SUGAR with B-SiLU is applied, the network produces far more active neurons and is able to solve the regression task.

Figure 6: Exemplary comparison of predictions. The left plot shows the prediction of the plain ReLU network, whereas SUGAR with B-SiLU is applied on the right.

In addition, we tracked the mean layer activation probabilities, as shown in Figure 7, in order to monitor activity within each layer and estimate the number of neurons actively contributing during the forward pass. As a result, we obtained a dynamic view of the network's internal activity, allowing us to assess whether the network utilized its capacity effectively and how sparsity evolved during learning.
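The mechanism evaluated in this appendix — ReLU in the forward pass, a smooth surrogate derivative in the backward pass — can be sketched in a few lines of plain Python. This is a minimal illustration only: the SiLU derivative below stands in for the B-SiLU derivative used in the experiments, whose closed form is not restated here.

```python
import math

def relu(x):
    """Forward pass: standard ReLU, so activations stay exactly sparse."""
    return max(0.0, x)

def silu_derivative(x):
    """Smooth surrogate derivative (SiLU's derivative, used here as a
    stand-in for the paper's B-SiLU derivative)."""
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 + x * (1.0 - s))

def relu_backward(x, upstream_grad):
    """Plain ReLU backward: zero gradient for negative pre-activations."""
    return upstream_grad * (1.0 if x > 0 else 0.0)

def sugar_backward(x, upstream_grad):
    """SUGAR backward: gradient flows even where the ReLU output is zero,
    so an inactive neuron can still be pushed back toward activity."""
    return upstream_grad * silu_derivative(x)

x = -1.2                       # a "dead" pre-activation
print(relu_backward(x, 1.0))   # 0.0 -> the neuron cannot recover
print(sugar_backward(x, 1.0))  # nonzero -> the neuron can be reactivated
```

The decoupled forward/backward pair is exactly why a once-silent neuron remains trainable under SUGAR, while under plain ReLU its gradient is identically zero.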
A model with low activation in early layers might struggle to propagate information, while excessively high activation in later layers may indicate redundancy. Nonetheless, for the sake of clarity, we only visualize the average activities across all layers.

Figure 7: Layer activation probabilities averaged over all runs and layers for each method and task. The plots indicate that SUGAR with B-SiLU increases the activations across all layers during training, while the use of standard ReLU only leads to a loss or stagnation of layer activity.

In [25], the number of layers for Equation 13 is increased from 10 to 20 due to the complexity of the equation. In the SUGAR experiments, the layer number is kept at 10, and the model still generated more activation across layers compared to the results given in [25], as shown in Table 2. Using the asymmetric positive initialization method suggested in [25] reduces
the possibility of neurons becoming inactive. However, once a neuron is inactive, the model has no way to reactivate it, because the proposed method only concerns the initialization. With SUGAR, reactivation remains possible as a result of the enabled gradient flow through that particular neuron.

Table 2: Empirical probabilities in % from 100 independent simulations of the network to collapse and learn nothing (A), to learn features to some degree (B), and to successfully approximate the target function (C), as indicated in [25]. Results annotated with * are taken from [25], which conducted 1,000 independent simulations. For the last three target functions, [25] categorized the results only into (A) and not collapsed (D), which is also adopted here. Best models are highlighted. The respective loss ranges are decomposed in Table 3.

Target function  Method                 (A)    (B)    (C)    (D)
f1(x)            Symmetric (He init)*   93.6   4.2    2.2    –
                 RAI*                   40.3   22.4   37.3   –
                 B-SiLU                 15.0   8.0    77.0   –
f2(x)            Symmetric (He init)*   91.9   –      –      8.1
                 RAI*                   29.2   –      –      70.8
                 B-SiLU                 9.0    –      –      91.0
f3(x)            Symmetric (He init)*   93.8   –      –      6.2
                 RAI*                   32.6   –      –      67.4
                 B-SiLU                 2.0    –      –      98.0
f4(x1, x2)       Symmetric (He init)*   76.8   –      –      23.2
                 RAI*                   9.6    –      –      90.4
                 B-SiLU                 8.0    –      –      92.0

Table 3: Each run was assigned to a category by dividing the resulting loss histogram into the respective number of sections. This table provides the loss ranges which were assigned to each category. Note that no loss was observed outside the reported ranges.
Target function  Category  Loss range
f1(x)            (A)       15.5818 ≤ L ≤ 15.7392
                 (B)       10.5453 ≤ L ≤ 11.0175
                 (C)       0.0003 ≤ L ≤ 0.6299
                 (D)       –
f2(x)            (A)       36.2548 ≤ L ≤ 36.6155
                 (B)       –
                 (C)       –
                 (D)       0.5452 ≤ L ≤ 27.2373
f3(x)            (A)       19.2409 ≤ L ≤ 19.4284
                 (B)       –
                 (C)       –
                 (D)       0.6767 ≤ L ≤ 1.4268
f4(x)            (A)       81.0379 ≤ L ≤ 81.5630
                 (B)       –
                 (C)       –
                 (D)       29.0563 ≤ L ≤ 64.7609

B Experimental setup for VGG-16 and ResNet-18

All activation-function experiments on CIFAR-10 and CIFAR-100 were run with identical settings:
•Batch size: 128
•Validation split: 10% of the training set
•Data augmentation: none
•Number of workers: 4
•Optimizer: SGD with learning rate 0.001
•LR schedule: single milestone at epoch 100
•Epochs: 50 for CIFAR-10, 100 for CIFAR-100
•Repetitions: 5 independent runs per configuration with seeds [1, 10, 20, 25, 42]
•Hardware: NVIDIA RTX A6000 GPUs (CUDA 12.6, Driver 560.35.03)

C Complete results for VGG-16 and ResNet-18 on CIFAR-10/100

In this section, the full results of VGG-16 and ResNet-18 are provided. Strikingly, when ReLU and B-SiLU join forces, they excel at generalization.

Table 4: Test accuracy statistics for ResNet-18 and VGG-16 on CIFAR-10. Best model and the second best model are highlighted.

Model      Activation      non-SUGAR (%)   SUGAR (%)
ResNet-18  ELU             77.91±0.49      83.47±0.62
           GELU            71.57±0.52      73.89±0.39
           LeakyReLU       73.08±0.33      73.84±0.16
           Mish            74.24±0.22      79.19±0.25
           B-SiLU          76.76±0.52      86.42±0.33
           ReLU            73.22±0.42      —
           SELU            75.90±0.63      81.95±0.21
           NeLU(α=0.01)    73.01±0.32      73.08±0.42
           NeLU(α=0.05)    71.01±0.55      72.12±0.41
           NeLU(α=
0.1)     69.42±0.34      71.09±0.59
           Swish           73.91±0.49      73.85±0.22
VGG-16     ELU             78.87±0.43      85.58±0.16
           GELU            75.03±0.45      75.86±0.25
           LeakyReLU       75.74±0.71      76.04±0.26
           Mish            76.43±0.30      78.98±0.33
           B-SiLU          78.50±0.25      88.35±0.26
           ReLU            75.85±0.51      —
           SELU            78.28±0.34      86.87±0.28
           NeLU(α=0.01)    75.54±0.30      75.88±0.23
           NeLU(α=0.05)    74.61±0.19      75.50±0.33
           NeLU(α=0.1)     73.19±0.53      74.75±0.17
           Swish           75.92±0.48      72.81±0.48

Table 5: Test accuracy statistics for ResNet-18 and VGG-16 on CIFAR-100. Best model and the second best model are highlighted.

Model      Activation      non-SUGAR (%)   SUGAR (%)
ResNet-18  ELU             49.91±0.56      52.38±0.37
           GELU            41.55±0.43      44.30±0.51
           LeakyReLU       43.67±0.44      43.41±0.28
           Mish            44.61±0.32      48.37±0.30
           B-SiLU          48.99±0.78      56.51±0.46
           ReLU            43.38±0.57      —
           SELU            48.73±0.80      52.08±0.58
           NeLU(α=0.01)    42.82±0.82      43.21±0.38
           NeLU(α=0.05)    41.52±0.67      42.40±0.42
           NeLU(α=0.1)     40.42±0.39      41.94±0.17
           Swish           43.71±0.43      44.05±0.26
VGG-16     ELU             48.76±0.40      58.09±0.40
           GELU            42.12±0.11      43.51±0.55
           LeakyReLU       43.68±0.42      44.11±0.30
           Mish            44.96±0.37      47.74±0.12
           B-SiLU          48.73±0.21      64.47±0.32
           ReLU            43.48±0.36      —
           SELU            48.34±0.11      61.20±0.50
           NeLU(α=0.01)    42.89±0.21      43.18±0.28
           NeLU(α=0.05)    41.58±0.29      42.85±0.20
           NeLU(α=0.1)     40.35±0.88      42.19±0.47
           Swish           44.39±0.44      39.59±0.31

D Activation functions and their derivatives

Figure 8: Comparison of activation functions and their derivatives used in modern neural networks. The left plot shows the functional forms of ELU, GELU, SiLU, SELU, Leaky ReLU (with slope 0.2), B-SiLU, and ReLU. The right plot shows their respective derivatives. ReLU and B-SiLU are overlaid for visual prominence. Notably, non-linear smooth activations exhibit continuous derivatives, while ReLU and its variants introduce discontinuities or sharp transitions. Axis labels denote input x and activation output f(x) or its derivative f′(x).
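For reference, several of the activations and derivatives compared in Figure 8 can be written out directly. This sketch covers ReLU, Leaky ReLU, ELU, and SiLU with their standard textbook definitions; B-SiLU is deliberately omitted because its closed form is not restated in this appendix.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Forward forms (standard definitions)
def relu(x):                    return max(0.0, x)
def leaky_relu(x, slope=0.2):   return x if x > 0 else slope * x
def elu(x, a=1.0):              return x if x > 0 else a * (math.exp(x) - 1.0)
def silu(x):                    return x * sigmoid(x)

# Derivatives — note the hard zero of d_relu for x < 0, versus the
# smooth, everywhere-nonzero derivatives of ELU and SiLU
def d_relu(x):                  return 1.0 if x > 0 else 0.0
def d_leaky_relu(x, slope=0.2): return 1.0 if x > 0 else slope
def d_elu(x, a=1.0):            return 1.0 if x > 0 else a * math.exp(x)
def d_silu(x):
    s = sigmoid(x)
    return s * (1.0 + x * (1.0 - s))
```

The contrast the figure highlights is visible numerically: `d_relu(-1.0)` is exactly 0, while `d_elu(-1.0)` and `d_silu(-1.0)` remain nonzero.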
E Additional plots from VGG-16 and ResNet-18 experiments

This appendix presents additional plots to provide deeper insight into the convergence behavior, activation profiles, and test accuracies observed in the experiments with VGG-16 and ResNet-18.

E.1 Validation curve plots

Figure 9: Mean validation-loss curves for CIFAR-10/100 on VGG-16 and ResNet-18 with confidence intervals. Each experiment pair (row) shares its own log-scaled y-axis; experiment titles are displayed vertically on the right-hand plot in each row.

E.2 Activation plots

(a) ReLU (CIFAR-10, VGG-16); (b) SUGAR B-SiLU (CIFAR-10, VGG-16);
(c) ReLU (CIFAR-100, ResNet-18); (d) SUGAR B-SiLU (CIFAR-100, ResNet-18); (e) ReLU (CIFAR-10, ResNet-18); (f) SUGAR B-SiLU (CIFAR-10, ResNet-18)

Figure 10: Activation profiles of ReLU and SUGAR B-SiLU for VGG-16 and ResNet-18 trained on CIFAR-10 and CIFAR-100. The x-axis shows the activation count per data sample, the y-axis indicates the layer index (with the final layer being fully connected), and the z-axis shows the normalized frequency, allowing for comparison across layers and activation functions.

E.3 Bar chart accuracies

Figure 11: Test accuracy of VGG-16 on CIFAR-10, comparing baseline to SUGAR.

Figure 12: Test accuracy of ResNet-18 on CIFAR-10, comparing baseline to SUGAR.
Figure 13: Test accuracy of ResNet-18 on CIFAR-100, comparing baseline to SUGAR.

F Experimental setup for Swin Transformer

The model is taken from the original work [23]. In this study, the tiny version of the Swin Transformer (28M parameters) is used. Although our implementation is completely in line with [23], we provide detailed specifications regarding the training procedure here. The Swin Transformer was trained on the Tiny ImageNet dataset with 200 classes.
•Batch size: 200
•Validation split: 5% of the training set
•Data augmentation: several techniques are employed during training:
  –Color jitter with intensity 0.4
  –AutoAugment policy: rand-m9-mstd0.5-inc1
  –Random erasing with probability 0.25, mode pixel, count 1
  –Mixup with alpha 0.8
  –Cutmix with alpha 1.0
  –Mixup and Cutmix applied in batch mode with a switch probability of 0.5
•Number of workers: 8
•Optimizer: AdamW with betas (0.9, 0.999), epsilon 1e-8, weight decay 0.05
•LR schedule: cosine, with:
  –Base learning rate: 5e-4, scaled linearly with batch size and number of devices
  –Warmup learning rate: 5e-7
  –Minimum learning rate: 5e-6
  –Warmup epochs: 20
  –Gradient clipping with max norm: 5.0
•Epochs: 300
•Repetitions: 5 independent runs per configuration with seeds [1, 10, 20, 25, 42]
•Hardware: NVIDIA GeForce RTX 4090 24GB GPU (CUDA 12.4, Driver 550.144.03)

G Experimental setup for Conv2NeXt

The model is taken from the original work [5]. Following the settings in [5], the base version of Conv2NeXt (7M parameters) is used. Although our implementation
is completely in line with [5], we provide detailed specifications regarding the training procedure here. torch.compile is applied with SUGAR. Conv2NeXt was trained on the CIFAR-100 dataset.
•Batch size: a per-GPU batch size of 200 was used, with gradient accumulation over 4 steps, yielding an effective batch size of 800.
•Training/validation set: 50,000 / 10,000, as in the default CIFAR-100 split in torchvision. In accordance with [5], no test set was used.
•Data augmentation: the following strategies were deployed:
  –AutoAugment: rand-m9-mstd0.5-inc1
  –Color jitter: 0.4
  –Random erasing with probability 0.25, mode pixel, count 1
  –Mixup: alpha 0.8
  –Cutmix: alpha 1.0
  –Combined Mixup/Cutmix applied in batch mode with switch probability 0.5
•Number of workers: 10
•Optimizer: AdamW with learning rate 4e-3, weight decay 0.05, epsilon 1e-8, default betas (0.9, 0.999)
•LR schedule: cosine, with:
  –Initial learning rate: 4e-3
  –Minimum learning rate: 1e-6
  –Warmup period: 20 epochs
  –Weight decay followed a cosine schedule as well
•Epochs: 300
•Repetitions: 5 independent runs per configuration with seeds [1, 10, 20, 25, 42]
•Hardware: NVIDIA H100 80GB GPU (CUDA 12.4, Driver 550.127.08)
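The warmup-plus-cosine schedule described above can be sketched as a closed-form function of the epoch. Only the initial rate, minimum rate, warmup length, and epoch count are specified in the setup; the warmup start value and the linear warmup shape below are assumptions for illustration.

```python
import math

def cosine_lr(epoch, total_epochs=300, base_lr=4e-3,
              min_lr=1e-6, warmup_epochs=20, warmup_lr=1e-6):
    """Linear warmup to base_lr, then cosine decay to min_lr.
    Warmup start value and per-epoch granularity are assumptions."""
    if epoch < warmup_epochs:
        # Linear ramp from warmup_lr up to base_lr
        t = epoch / warmup_epochs
        return warmup_lr + t * (base_lr - warmup_lr)
    # Cosine decay over the remaining epochs
    t = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * t))

for e in (0, 20, 150, 300):
    print(e, cosine_lr(e))
```

The schedule peaks at the configured base rate of 4e-3 exactly when warmup ends (epoch 20) and decays smoothly to the 1e-6 floor at epoch 300.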
arXiv:2505.22086v1 [cs.AR] 28 May 2025

iDSE: Navigating Design Space Exploration in High-Level Synthesis Using LLMs

Runkai Li∗ (Southeast University), Jia Xiong∗ (Southeast University), Xi Wang† (Southeast University)

Abstract

High-Level Synthesis (HLS) serves as an agile hardware development tool that streamlines circuit design by abstracting the register transfer level into behavioral descriptions, while allowing designers to customize the generated microarchitectures through optimization directives. However, the combinatorial explosion of possible directive configurations yields an intractable design space. Traditional design space exploration (DSE) methods, despite adopting heuristics or constructing predictive models to accelerate Pareto-optimal design acquisition, still suffer from prohibitive exploration costs and suboptimal results. Addressing these concerns, we introduce iDSE, the first LLM-aided DSE framework that leverages HLS design quality perception to effectively navigate the design space. iDSE intelligently prunes the design space to guide LLMs in calibrating representative initial sampling designs, expediting convergence toward the Pareto front. By exploiting the convergent and divergent thinking patterns inherent in LLMs for hardware optimization, iDSE achieves multi-path refinement of design quality and diversity. Extensive experiments demonstrate that iDSE outperforms heuristic-based DSE methods by 5.1×–16.6× in proximity to the reference Pareto front, matching NSGA-II with only 4.6% of the explored designs. Our work demonstrates the transformative potential of LLMs in scalable and efficient HLS design optimization, offering new insights into multiobjective optimization challenges.
1 Introduction

Transistor scaling has driven exponential performance improvements in modern circuits, yet it remains incapable of accommodating the escalating computational demands in fields such as neural networks [1–4], computer vision [5, 6], robotics [7, 8], and genome sequence analysis [9–12]. The development of domain-specific accelerators (DSAs) tailored to specific workloads has shown notable growth [13, 14]. However, the protracted chip design and verification cycles impede the swift iteration of DSAs required to keep pace with rapidly evolving dedicated application requirements.

High-Level Synthesis (HLS) abstracts hardware description languages (HDL) into behavioral representations, accelerating FPGA-based prototyping with comparable circuit performance and improved hardware development efficiency [15, 16]. Furthermore, most vendor HLS tools incorporate optimization directives to customize the microarchitectures synthesized from high-level programming languages, aligning design requirements with quality of results (QoR). However, the extensive permutations of directive configurations constitute an expansive design space, rendering the identification of Pareto-optimal designs challenging and expertise-intensive, as illustrated in Figure 1.

∗Equal contribution. †Corresponding author: Xi Wang (xi.wang@seu.edu.cn). Preprint. Under review.

To capitalize on the reconfigurability and scalability advantages of HLS and advance hardware optimization, recent research has focused on automated and efficient design space exploration (DSE) methods
Figure 1: Time-consuming manual directive configuration tuning based on QoR reported in HLS.

aimed at reducing manual intervention. However, the extensive combination of directives exponentially expands the design space. Furthermore, the time-consuming performance evaluation required by HLS tools renders exhaustive traversal impractical. Traditional DSE methods typically treat HLS as a black-box optimization problem, relying heavily on heuristics that necessitate multiple iterations to approximate Pareto-optimal solutions. This approach often results in prolonged exploration periods and inadequate coverage of optimal designs [17–22]. In response, machine learning (ML) and deep learning (DL) techniques have revisited DSE by leveraging predictive models as surrogates for expensive HLS evaluations. These approaches significantly improve efficiency by enabling the evaluation of substantially more designs [23–31]. However, model-based methods typically involve significant training overhead and exhibit limited generalization across unseen workloads, HLS environments, or varied hardware constraints, potentially compromising the optimality of explored designs. Recently, Large Language Models (LLMs) have attracted considerable attention for their exceptional proficiency in natural language processing and code generation. However, their effectiveness in specialized applications is often hampered by limited domain-specific data. Previous efforts have harnessed the code generation and debugging capabilities of LLMs to streamline chip design and verification workflows, highlighting their potential to improve efficiency within electronic design automation (EDA) [32–36].
Despite these advancements, existing approaches have not sufficiently exploited LLMs to achieve a high degree of hardware optimization for the high-quality circuits required by emerging applications. Recent research has demonstrated considerable promise in employing LLMs for gradient-free black-box optimization tasks [37–41]. Such approaches leverage the natural language reasoning capabilities of LLMs, enabling iterative and informed exploration of optimization trajectories through linguistic prompting. HLS abstracts hardware designs to the algorithmic level while ensuring functional equivalence, providing an opportunity to empower hardware optimization by directing LLMs towards the design structure, thus reducing hardware specification semantics. Furthermore, by leveraging the domain expertise and reasoning capabilities inherent in LLMs, it becomes feasible to efficiently allocate operational parallelism in HLS-generated hardware. Building on this insight, we propose iDSE, a novel LLM-navigated DSE framework that reduces time costs and expertise barriers for HLS design optimization. This framework automates the end-to-end process by extracting HLS design features and pruning invalid designs, thus effectively distilling the design space. Furthermore, iDSE incorporates a warm-start mechanism, which initializes exploration with representative seed designs to accelerate convergence to the Pareto front. By employing operators inspired by evolutionary algorithms to spawn refined design candidates, iDSE capitalizes on prior knowledge to analyze design bottlenecks, identify optimization opportunities, and refine designs. Quantitative metrics highlight that our framework shapes a broader and more concave Pareto front, providing a new perspective for promoting the customization of DSAs.
The main contributions of this paper are as follows:
•We introduce iDSE, the first end-to-end design space exploration framework integrating optimization trajectory awareness with prior expertise injection. This presents a compelling direction for conquering the intricate challenges of automated hardware optimization.
•We propose a scalable
Feature-Driven Pruning approach to significantly expedite the DSE iterations by constructing a compact yet expressive HLS design space.
•We reimagine DSE initialization through the LLM-guided Seed Directive Generation methodology. This approach enables warm-starts that rapidly converge toward comprehensive and concave Pareto front profiling by improving the quality and diversity of the initial sampling designs.
•We introduce a novel QoR-Aware Adaptive Optimization system that exploits the convergent and divergent thinking capabilities of LLMs to navigate multi-path HLS design optimization. By perceiving QoR feedback, our system transcends existing bottlenecks and escapes local optima.

Figure 2: Example of customizing synthesized hardware with HLS optimization directives.
Left: Default loop read-compute-write pipelining structure without specifying any directives. Middle: QoR mapping across the design space. Right: Fully parallelized memory access and loop operations.

•Our extensive experiments across diverse HLS benchmarks demonstrate a 5.1×–16.6× improvement in explored Pareto front quality over heuristic-based DSE methods, with up to 25.1× higher exploration efficiency. This work reveals the unique potential of LLM-aided hardware optimization.

2 Background & Related Work

High-Level Synthesis Design Space Exploration. HLS employs optimization directives to transform high-level description languages (C/C++/SystemC) into specific microarchitectures through hardware resource allocation and operation scheduling/binding. While designers typically prioritize Pareto-optimal solutions that satisfy certain constraints, the exponentially growing design space compiled by the Cartesian product of directives renders brute-force traversal computationally infeasible, particularly given the time-consuming quality-of-results (QoR) evaluation for each configuration. Figure 2 illustrates a toy example for implementing and optimizing a vector Hadamard product in HLS. Although this HLS design contains only one loop and three arrays, considering the loop and memory-access parallelization directives supported in this paper, where the factors for loop unrolling and memory partitioning are divisors of the loop and array bounds, we still obtain 1.58M valid designs composed of various combinations and parameters of directives. Exhaustively evaluating all designs would require approximately 3 years, assuming only one minute per configuration evaluation. Meta-heuristic [17–20, 42] and dedicated heuristic [22, 43–45] DSE methods treat the HLS tool as a black box, leveraging hardware optimization characteristics to efficiently approximate Pareto-optimal solutions.
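To make the combinatorial growth and the Pareto objective concrete, the sketch below enumerates a toy directive sub-space (unroll and partition factors restricted to divisors of the bounds, as described above) and then filters a set of (latency, utilization) pairs down to its Pareto front. The enumerated options and the QoR values are illustrative assumptions, not the paper's actual 1.58M-design space or tool-reported numbers.

```python
from itertools import product

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

# Toy directive space for one loop of bound 8 and three arrays of size 8:
# pipeline on/off, an unroll factor, and a (type, factor) partition per array.
loop_bound, n_arrays, part_types = 8, 3, ("block", "cyclic")
unrolls = divisors(loop_bound)
partitions = [(t, f) for t in part_types for f in divisors(loop_bound)]
space = list(product([False, True], unrolls, *([partitions] * n_arrays)))
print(len(space))  # 2 * 4 * 8**3 = 4096 configurations for this tiny kernel

def pareto_front(points):
    """Keep the (latency, utilization) points not dominated by any other
    point, i.e. no other point is at least as good in both objectives."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points)]

qor = [(40, 10), (20, 30), (10, 80), (25, 25), (40, 15)]  # illustrative QoR
print(sorted(pareto_front(qor)))  # [(10, 80), (20, 30), (25, 25), (40, 10)]
```

Even this stripped-down space grows multiplicatively with every additional loop or array, which is why exhaustive HLS evaluation quickly becomes infeasible and the exploration target is the non-dominated front rather than a single optimum.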
While exhaustive searches are avoided, these methods still necessitate multiple invocations of the HLS tool to guide optimization trajectories. Moreover, their effectiveness remains constrained by the quality of the initial sampling, limiting
exploration within a confined design space under restricted search budgets. The introduction of ML and DL methods has driven the construction of analysis models for QoR prediction [27, 28, 46, 47]. By providing surrogate models for HLS tool evaluations, these methods enable performance and resource-utilization predictions for the synthesized hardware. Graph neural networks (GNNs) that embed program nodes and directive configurations have notably improved prediction accuracy [24–26, 48]. However, ML/DL-based approaches demand substantial training and deployment costs and exhibit limited generalization across different applications and versions of vendor HLS tools. Additionally, these methods often focus on optimizing a single performance metric, thereby overlooking the full spectrum of optimization requirements.

LLM-Aided Hardware Design. Integration of LLMs has catalyzed the evolution of EDA tools [32–35, 49–58]. However, current efforts struggle to keep pace with rapidly evolving application requirements, and achieving modern high-performance computing architectures remains challenging, necessitating breakthroughs in LLM-aided hardware optimization. Previous research on applying LLMs to HLS has examined their capability to insert directives into source code, utilizing knowledge-augmented technology to bridge the expertise gap in HLS [59–63]. However, the results have been underwhelming, with code transformations frequently introducing synthesis errors. Furthermore, existing studies primarily target performance-optimal designs; identifying HLS designs across the Pareto front to accommodate diverse optimization preferences remains nascent [64, 65]. This research gap obscures the practical implementation of LLMs in hardware optimization contexts.

Machine Learning in Multiobjective Optimization.
Different from conventional multiobjective optimization methods [66-73], recent research has demonstrated significant improvements in heuristic optimization [41, 74-78] and neural architecture search [79-84] by embedding LLMs into evolutionary algorithms (EAs). This approach outperforms manual tuning and traditional automated approaches, highlighting the promise of LLM-driven multiobjective optimization. Some research has employed LLM-enhanced EA operators in conjunction with GNN-based predictive models for DSE [85]. However, substituting actual evaluation with regression models inevitably compromises performance. Meanwhile, existing LLM implementations of EAs often yield suboptimal results due to inadequate reflection on optimization trajectories and a lack of task-specific guidance for DSE.

3 Preliminary & Problem Formulation

Balancing computation and memory-access parallelism under hardware resource constraints to achieve satisfactory circuit performance is a delicate process. Since performance improvements and resource consumption are often contradictory, DSE constitutes a multiobjective optimization problem. We focus on two primary objectives in DSE: execution latency and resource utilization. For an HLS design λ(φ), φ represents the inserted optimization directives. This work emphasizes three directives, PIPELINE (LP), UNROLL (LU), and ARRAY_PARTITION (AP), which control loop execution and memory-access parallelism. We define φ as a feature vector φ = [LP_i, LU_j, AP_{k,d}], where LP_i is a Boolean enabling pipelining of loop i, LU_j is the unroll factor of loop j, and AP_{k,d} is the partition type and factor of array k along dimension d. Under a vendor HLS tool H, we analyze the QoR of an explored design using its latency Lat(H, λ(φ)) and resource utilization Util(H, λ(φ)).

Definition 1 (Multiobjective Optimization of DSE).
The multiobjective DSE task is defined as:

λ(φ*) = arg min_{φ ∈ Φ ⊂ Z^n} [Lat(H, λ(φ)), Util(H, λ(φ))]   (1)

We aim to optimize both objectives without significantly compromising either. The goal of DSE is to rapidly and accurately search the
design space Φ for Pareto-optimal designs.

Definition 2 (Pareto-Optimal Designs). Given two explored designs λ(φ*) and λ(φ_a), if

Lat(H, λ(φ*)) ≤ Lat(H, λ(φ_a)),  Util(H, λ(φ*)) ≤ Util(H, λ(φ_a)),   (2)

with at least one inequality strict, we say that λ(φ*) dominates λ(φ_a). If no other φ ∈ Φ yields a design that dominates λ(φ*), then λ(φ*) is called a Pareto-optimal design. All such designs form the Pareto front.

Definition 3 (Effectiveness of DSE). We employ the average distance to reference set (ADRS) metric to quantify the gap d(·) between an explored Pareto front P_E and a reference Pareto front P_R [86]:

ADRS(P_E, P_R) = (1 / |P_R|) Σ_{λ(φ_γ) ∈ P_R} min_{λ(φ_ω) ∈ P_E} d(λ(φ_γ), λ(φ_ω))   (3)

A small ADRS indicates that the explored designs more effectively approximate the entire reference Pareto front. Achieving a small ADRS within a constrained search budget demonstrates the performance and efficiency of the proposed DSE method.

4 iDSE Design & Philosophy

Our framework, iDSE, automates the exploration of design spaces to optimize HLS designs with an LLM as the backbone, effectively identifying Pareto-optimal designs while balancing competing objectives. The workflow of iDSE, depicted in Figure 3, unfolds in three distinct stages:

1) Preprocessing. iDSE first extracts a unified design space from the provided HLS design and employs specific pruning strategies to eliminate invalid directive configurations that may result in ineffective or inefficient synthesis. This step defines a feasible domain and narrows the parameter ranges to be searched, thereby improving the efficiency of approximating the Pareto front.

2) Warm-Start. iDSE then leverages the LLM to reproduce learned insights and identify directive configurations with optimization potential from the expansive design space, providing high-quality and discrete sampling designs to warm-start subsequent optimization.

3) Adaptive Optimization.
To further approximate the Pareto front, iDSE performs a bottleneck analysis on the initial sampling designs, guided by empirical optimization knowledge, which prompts the LLM to conduct oriented reasoning for localized design refinement. Furthermore, iDSE exploits the divergent thinking of LLMs to escape the limitations of existing optimization strategies and propose novel optimization directions, thus expanding the exploration scope and improving design diversity.
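Before detailing the stages, the dominance relation of Definition 2 and the ADRS metric of Definition 3 can be sketched in a few lines. This is a minimal illustration that assumes a plain Euclidean distance for d(·); the reference metric [86] may instead normalize each objective:

```python
import math

# Pareto dominance over (latency, utilization) pairs: no worse in every
# objective and strictly better in at least one (Definition 2).
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of the explored designs."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

def adrs(explored, reference):
    """Definition 3: mean, over the reference front, of the distance from each
    reference design to its nearest explored design (Euclidean assumed here)."""
    return sum(min(math.dist(r, e) for e in explored) for r in reference) / len(reference)

pts = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
print(pareto_front(pts))  # [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
print(round(adrs([(2.0, 2.0)], pareto_front(pts)), 4))  # 1.4907
```

An ADRS of 0 means every reference design coincides with an explored one, which matches the paper's reading that smaller is better.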
Figure 3: iDSE Workflow

4.1 Preprocessing: Directive Configuration Extraction and Design Space Pruning

The allocation of loop and memory-access parallelism constitutes a critical bottleneck in hardware optimization, significantly affecting both the performance and the resource utilization of synthesized hardware designs. The Feature Extractor parses the design structure to extract key structural metadata (e.g., array dimensions, loop trip counts) that determines the feasible parallelism degrees for optimization directive combinations and parameters. However, for large design footprints, this may introduce aggressive parallelism that makes the generated optimization strategy infeasible.

Building upon these extracted features, we propose the Feature-Driven Pruning approach, which eliminates invalid directive configurations that may lead to excessive synthesis time or even failure, thus effectively compressing the design space. This approach prompts the LLM to adopt specific pruning strategies that intelligently identify and remove aggressive parallelism and invalid directive combinations. For example, when examining nested loops with multilayer structures or large inner-loop trip counts, judiciously disabling the pipeline option for the outer loop eliminates about half of the invalid designs (pruning strategies are detailed in Appendix A.2). Furthermore, we provide a prompt interface that allows designers to customize their pruning strategies according to specific hardware resource tolerances. Importantly, this method preserves potential Pareto-optimal designs, ensuring they are not discarded by overly aggressive pruning.
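As a concrete illustration of the outer-loop rule above: pipelining an outer loop forces full unrolling of everything nested inside it, so for nests whose inner trip count exceeds a tolerance, every configuration that pipelines the outer loop can be dropped. The loop names and the trip-count tolerance below are illustrative, not taken from the paper:

```python
# Sketch of one feature-driven pruning rule. Configurations are dicts mapping
# loop name -> pipeline flag (0/1); loop_nest lists (name, trip_count),
# outermost first. The tolerance max_inner_trip is a made-up default.
def prune_outer_pipeline(loop_nest, configs, max_inner_trip=64):
    (outer_name, _), (_, inner_trip) = loop_nest[0], loop_nest[1]
    if inner_trip <= max_inner_trip:
        return list(configs)               # small inner loop: keep everything
    return [c for c in configs if c[outer_name] == 0]

nest = [("row", 128), ("col", 128)]        # a 128x128 nested loop
space = [{"row": r, "col": c} for r in (0, 1) for c in (0, 1)]
pruned = prune_outer_pipeline(nest, space)
print(len(space), len(pruned))             # 4 2 -- about half removed
```

On this toy two-loop space, the rule removes exactly the half of the configurations that pipeline the outer loop, matching the "about half" estimate in the text.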
4.2 Warm-Start: LLM-Guided Design Space Exploration Initialization

LLMs can effectively leverage prior knowledge of hardware optimization by emulating the reasoning patterns of seasoned engineers, as similar directives embedded in analogous design structures typically yield comparable QoR [87, 88]. We propose an LLM-guided Seed Directive Generation mechanism, which first generates a formatted feature vector of the seed directive configuration and then automatically creates a Tcl script to embed the directives into the original HLS design. Meanwhile, this structured directive configuration enables seamless integration with heuristic-based DSE methods. To ensure high-quality initial sampling designs, we establish specific design principles to guide the LLM in generating directive configurations that can be executed efficiently. This approach prevents the LLM from simply regurgitating trivial solutions drawn from its pretraining data (detailed prompt design is available in Appendix B). Our initialization strategy begins by defining the sampling task with universal optimization objectives: prioritizing performance, prioritizing resource utilization, and balancing both considerations. By providing the LLM with the structured directive configurations derived from the preprocessing stage, we enable it to generate a prescribed number of diverse directive combinations that cover distinct regions of the design space. An automation tool then parses the execution latency and resource utilization metrics from the HLS synthesis report.

4.3 Adaptive Optimization: Multi-Path Directive Tuning with QoR Perception

Current LLMs exhibit performance degradation when handling lengthy contexts and face difficulties when scaling to extensive design spaces. Furthermore, initial sampling alone inadequately captures the mapping between directive allocation strategies and their effectiveness.
To address these limitations, we propose a QoR-Aware Adaptive Optimization system that integrates LLM-based optimization trajectory reflection, bottleneck analysis, and design refactoring for HLS design refinement. This system leverages the convergent and divergent thinking capabilities of LLMs to expedite Pareto-optimal design acquisition while achieving broader design space coverage. Algorithm 1 details this process.

Algorithm 1: Adaptive Optimization for HLS DSE
Input: LLM πθ, pruned design space Φ, directive configuration φ, HLS design λ(φ), initial sampling size N0, maximum number of generations Imax, adaptive population size Pi, vendor HLS tool H
Output: Pareto-optimal designs λ(φ*)
1: Initialize P0 with φinit ← WARMSTART(πθ, Φ, N0, λ(φ))  ▷ Section 4.2
2: Evaluate quality of results Q of initial sampling HLS designs λ(φinit) using H
3: for i ← 0 to Imax do
4:   Label population Pi with rank and crowding distance
5:   λ(φelite) ← SEL(πθ, Q, Pi)  ▷ Optimization trajectory reflection
6:   φc ← CONVERGENTSEARCH(πθ, Φ, λ(φelite))  ▷ Oriented tuned directive configurations φc
7:   φd ← DIVERGENTSEARCH(πθ, λ(φelite))  ▷ Non-oriented tuned directive configurations φd
8:   Embed φc and φd into the original HLS design λ(φ); evaluate Q of λ(φc) and λ(φd)
9:   Adaptive population management for Pi
10: end for
11: Select Pareto-optimal designs λ(φ*) through non-dominated sorting of all reported Q
12: return λ(φ*)

Optimization Trajectory Reflection. Following the Warm-Start phase (lines 1-2), we construct the initial population using an elite-individual selection strategy resembling the NSGA-II framework [89], which prioritizes superior designs while maintaining population diversity, ensuring both coverage and uniform distribution along the explored Pareto front (line 4). For the labeled designs and their QoR, we prompt the LLM to reflect on the optimization trajectory, thus facilitating a sensible selection of directive configurations with optimization potential.
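The rank and crowding-distance labeling of line 4 follows NSGA-II. A minimal sketch over (latency, utilization) points, assuming minimization in both objectives and not reflecting the paper's actual implementation:

```python
# NSGA-II-style labeling: peel off successive non-dominated fronts (rank),
# then score spread within the population via crowding distance.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_rank(points):
    """Rank 0 is the Pareto front of the population; higher ranks are peeled off."""
    remaining, ranks, rank = dict(enumerate(points)), {}, 0
    while remaining:
        front = [i for i, p in remaining.items()
                 if not any(dominates(q, p) for j, q in remaining.items() if j != i)]
        for i in front:
            ranks[i] = rank
            del remaining[i]
        rank += 1
    return ranks

def crowding_distance(points):
    """Boundary points get infinity so the extremes of the front survive."""
    n, m = len(points), len(points[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: points[i][k])
        span = (points[order[-1]][k] - points[order[0]][k]) or 1.0
        dist[order[0]] = dist[order[-1]] = float("inf")
        for a in range(1, n - 1):
            dist[order[a]] += (points[order[a + 1]][k] - points[order[a - 1]][k]) / span
    return dist

pop = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
print(nondominated_rank(pop))   # {0: 0, 1: 0, 2: 0, 3: 1}
print(crowding_distance(pop))   # [inf, 1.333..., inf, 1.333...]
```

Selection then prefers lower rank and, within a rank, larger crowding distance, which is how the elite strategy keeps the explored front both close to optimal and evenly spread.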
Subsequently, a lightweight analysis examines the selected designs for data dependencies that may block pipeline/unroll optimization, memory access patterns that must align with the memory partitioning parallelism, and resource saturation that determines hardware constraint compatibility (line 5). This process illustrates how LLMs explore diverse optimization directions during the search while identifying appropriate niches for trade-offs or specialized optimizations.

Bottleneck Analysis with QoR-Aware Adaptation. When design goals remain elusive, experts adopt a systematic approach to identify critical bottlenecks and exploit potential optimization opportunities rather than discard existing work altogether. The optimization trajectory reflection provides valuable insights that guide subsequent LLM reasoning about promising design refinements. Initially, the LLM classifies designs as compute-bound or memory-bound based on their QoR (line 6). For compute-bound designs, the LLM progressively increases the unrolling granularity of performance-critical loops, guided by the optimization trajectory analysis. For memory-bound designs, the LLM adjusts memory partition factors to meet or exceed the corresponding data-access unroll factors, while examining nested loops to apply coarse-grained pipelining to outer loops.

Divergence-Enhanced Design Refactoring. LLMs are often hesitant to extrapolate beyond established examples and venture into unexplored design territory. To enhance exploration coverage across the entire design space, we prompt the LLM to scrutinize current hardware optimization strategies and develop novel directive configurations that differ from previous iterations (line 7).
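The compute-/memory-bound dispatch described in the Bottleneck Analysis paragraph can be caricatured in a few lines. The fields, the lag test, and the doubling rule below are illustrative assumptions for exposition; the paper's actual heuristic is driven by LLM reasoning over the full QoR report:

```python
# Toy bottleneck dispatch: treat a design as memory-bound when its array
# partition parallelism lags behind its unroll parallelism (the datapath is
# starved for data), and compute-bound otherwise. Illustrative only.
def classify(design):
    if design["partition_factor"] < design["unroll_factor"]:
        return "memory-bound"
    return "compute-bound"

def refine(design):
    if classify(design) == "memory-bound":
        # widen partitioning to match the data-access unroll factor
        design["partition_factor"] = design["unroll_factor"]
    else:
        # progressively enhance unrolling in the critical loop
        design["unroll_factor"] *= 2
    return design

d = {"unroll_factor": 8, "partition_factor": 2}
print(classify(d))   # memory-bound
print(refine(d))     # {'unroll_factor': 8, 'partition_factor': 8}
```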
Additionally, we specify a rule that first coarsely estimates the lower bound of the initiation interval and then applies the pipelining strategy to the innermost loop of the nest, while dynamically adapting the memory partition factor to match the loop's operation characteristics. By shuffling directive combinations under specific hardware optimization principles, the LLM implements innovative optimization strategies for non-oriented refinement of the original design.

5 Evaluations & Discussions

In this section, we evaluate the effectiveness of iDSE
in exploring Pareto-optimal designs that satisfy diverse optimization preferences. We selected 12 HLS benchmarks with varying functionality and design space dimensions |Φ| from PolyBench [90], CHStone [91], and MachSuite [92]. Diverse design structures and memory footprints demonstrate the robust generalization of our approach. All synthesis was performed with Vitis HLS 2022.1 [93] targeting the Xilinx ZCU106 MPSoC platform.

Table 1: Comparison of iDSE effectiveness over baseline DSE methods (ADRS of the Pareto-optimal designs explored by each method; lower is better).

Benchmark | Size of Φ | # Directives | NSGA-II | ACO | MOEA/D | Lattice | HGBO-DSE | iDSE (ours)
atax | 4.2M | 13 | 2.3900 | 2.1073 | 0.8322 | 1.4974 | 0.3070 | 0.0355
bicg | 0.9M | 12 | 1.0109 | 0.2511 | 0.4540 | 4.6001 | 0.2429 | 0.0497
gemm | 38.5M | 14 | 1.1061 | 0.6338 | 0.4611 | 3.9707 | 0.4710 | 0.1039
gesummv | 12.6M | 11 | 0.6498 | 0.3935 | 0.3813 | 1.5178 | 0.3549 | 0.0231
mvt | 3.7M | 14 | 1.8678 | 1.8678 | 2.1441 | 1.9295 | 0.5934 | 0.0497
md-knn | 33.6M | 11 | 0.0217 | 0.0250 | 0.0245 | 0.0115 | 0.0068 | 0.0118
spmv | 0.3M | 8 | 0.4744 | 0.9045 | 0.3578 | 0.1258 | 0.0670 | 0.0126
stencil2d | 39.0K | 11 | 1.7985 | 0.8223 | 0.3248 | 1.0410 | 0.3955 | 0.0461
stencil3d | 58.7M | 21 | 1.7009 | 0.6733 | 0.4468 | 1.5011 | 0.4600 | 0.1646
viterbi | 55.7M | 21 | 0.1109 | 0.1743 | 0.2272 | 0.0382 | 0.0316 | 0.0022
sha | 12.3K | 8 | 0.3658 | 0.3287 | 0.2744 | 2.0663 | 0.2749 | 0.0945
autocorr | 27.6K | 10 | 0.0883 | 0.0847 | 0.0621 | 0.0093 | 0.0486 | 0.0168
Avg Improv. over NSGA-II | | | 1 | 1.5376× | 2.0724× | 2.0451× | 3.7021× | 25.9987×
Geo Mean Improv. over NSGA-II | | | 1 | 1.3048× | 1.6839× | 1.0876× | 3.2531× | 16.5955×

Figure 4: Comparison of explored Pareto fronts (normalized latency vs. normalized utilization) across benchmarks with different design spaces. iDSE converges to comprehensive and concave shapes with fewer search budgets (# Synthesis).

We compared iDSE against traditional DSE methods, including evolutionary algorithms (NSGA-II [89] and MOEA/D [68]), a swarm intelligence technique (ACO) [94], and state-of-the-art approaches including Lattice [21], with guided local exploration, and HGBO-DSE, a Bayesian optimization method based on MOTPE [31]. iDSE consistently outperformed these methods by exploring more comprehensive Pareto fronts. The experimental results highlight the capability of iDSE as an LLM-navigated design space exploration approach that effectively unleashes the potential of LLMs in hardware optimization.
The complete definitions of the benchmark design spaces and the hyperparameter settings for the heuristic-based DSE methods are detailed in Appendix C.

5.1 Navigating Design Space Exploration: Elegant and Swift Approximation of the Pareto Front

We evaluated the capability of iDSE to approximate reference Pareto fronts using ADRS (the complete definition is given in Appendix C.1). To ensure practicality, we extensively sampled the design spaces using random sampling and a specialized breadth-first search to construct the reference Pareto fronts. For smaller designs (stencil2d, autocorr, and sha), we performed exhaustive exploration to construct strong reference Pareto fronts. All DSE methods explored design spaces compressed by our Feature-Driven Pruning method to ensure a fair comparison. Table 1 demonstrates the superior performance of iDSE across most benchmarks. iDSE achieves geometric mean improvements of
16.6× over NSGA-II, 12.7× over ACO, 9.9× over MOEA/D, 15.3× over Lattice, and 5.1× over HGBO-DSE. While Lattice achieved the best result on autocorr through local search traversal, its sensitivity to initial sampling designs and its lack of a global perspective limit its effectiveness in exploring trade-off curves for designs with large memory footprints and complex structures. Similarly, HGBO-DSE outperforms the other baselines but still faces limitations when confronting vast design spaces without the guidance of prior knowledge. The remaining DSE methods suffer compressed exploration efficiency under limited search budgets. We present full experimental details in Appendix D.

Table 3: ADRS comparison of heuristic-based DSE methods with different initial sampling.*

Benchmark | NSGA-II (RS / BS / LHS / Warm-Start) | MOEA/D (RS / BS / LHS / Warm-Start) | ACO (RS / BS / LHS)
atax | 2.3900 / 1.1054 / 0.5737 / 0.3116 | 0.8322 / 0.4583 / 0.3057 / 0.1762 | 2.1073 / 0.9770 / 0.7917
bicg | 1.0109 / 0.3685 / 1.0765 / 0.2428 | 0.4540 / 0.2588 / 0.2573 / 0.2758 | 0.2511 / 0.3938 / 0.2570
gemm | 1.1061 / 0.5100 / 1.0581 / 0.4581 | 0.4611 / 0.3768 / 0.3664 / 0.2976 | 0.6338 / 0.4839 / 0.5489
gesummv | 0.6498 / 0.6508 / 0.9751 / 0.5575 | 0.3813 / 0.4097 / 0.3103 / 0.2640 | 0.3935 / 0.4165 / 0.3682
mvt | 1.8678 / 0.8544 / 0.6403 / 0.4193 | 2.1441 / 1.6956 / 0.5464 / 0.4976 | 1.8678 / 1.2207 / 0.8978
md-knn | 0.0217 / 0.0208 / 0.0064 / 0.0064 | 0.0245 / 0.0184 / 0.0064 / 0.0116 | 0.0250 / 0.0218 / 0.0064
spmv | 0.4744 / 1.0585 / 0.8129 / 0.0592 | 0.3578 / 0.8841 / 0.2643 / 0.1175 | 0.9045 / 0.9528 / 0.5974
stencil2d | 1.7985 / 0.4616 / 1.2631 / 0.3622 | 0.3248 / 0.2991 / 0.3789 / 0.2805 | 0.8223 / 0.9477 / 1.3129
stencil3d | 1.7009 / 0.6244 / 1.5550 / 0.1780 | 0.4468 / 0.6172 / 0.2057 / 0.2891 | 0.6733 / 0.6327 / 0.6682
viterbi | 0.1109 / 0.1069 / 0.0300 / 0.0158 | 0.2272 / 0.1421 / 0.0328 / 0.0540 | 0.1743 / 0.1362 / 0.1351
sha | 0.3658 / 0.1626 / 0.7494 / 0.1521 | 0.2744 / 0.0871 / 0.2060 / 0.2095 | 0.0847 / 0.2313 / 0.2571
autocorr | 0.0883 / 0.0702 / 0.0931 / 0.7458 | 0.0621 / 0.0901 / 0.0901 / 0.0631 | 0.3287 / 0.0704 / 0.0642
Avg | 1 / 1.9097× / 1.7808× / 4.6109× | 2.0724× / 2.5986× / 3.7184× / 4.2138× | 1.7401× / 1.6765× / 2.0451×
Geo Mean | 1 / 1.6508× / 1.3656× / 3.1709× | 1.6839× / 1.9743× / 3.1445× / 3.4065× | 1.3048× / 1.5094× / 1.8210×

* Experiments exclude ACO with Warm-Start because swarm intelligence struggles to leverage the advantage of seed directive configurations; detailed results are in Appendix D.3. ADRS values are averaged over 5 test runs, and Warm-Start results are averaged over 5 rounds of DeepSeek-R1 invocations.

Figure 5: Comparison among EA-based DSE methods under different initial sampling designs (mean ADRS vs. number of syntheses per benchmark).

Table 2: Comparison of search budgets for target ADRS across different benchmarks.

DSE Methods | Poly. | Mach. | CHS. | Speedup
NSGA-II | 108 | 108 | 108 | 1×
ACO | 41 | 68 | 83 | 3.31×
MOEA/D | 22 | 61 | 38 | 3.56×
Lattice | 89 | 55 | 72 | 2.22×
HGBO-DSE | 8 | 9 | 74 | 14.34×
iDSE | 4 | 5 | 7 | 25.07×

Figure 4 further illustrates the advantage of iDSE in multiobjective optimization, establishing more comprehensive and concave Pareto fronts with fewer search budgets (under 50 explored designs per benchmark). Table 2 compares the number of explored designs required by different DSE methods to achieve the target ADRS. iDSE significantly enhances exploration efficiency, achieving
https://arxiv.org/abs/2505.22086v1
a geometric mean speedup of 11.0× over meta-heuristic-based DSE methods and 4.4× over SOTA DSE methods. The notable improvement in performance and efficiency of iDSE stems from the dual effectiveness of initial high-quality sampling and intelligently guided searches. LLM accelerates convergence to Pareto-optimal designs that satisfy diverse optimization preferences by prudent directive combination scheduling and parameter tuning.

5.2 DSE Warm-Start: Robust Sampling Initialization for Convergence Acceleration

Heuristic-based DSE approaches often suffer from reduced convergence capabilities due to the lack of diverse and insightful initial sampling designs, which limits their ability to identify effective optimization directions. In contrast, Lattice uses U-shaped Beta sampling to strategically sample the boundaries of the design space, enabling more sensible cluster-based exploration near reference Pareto fronts. Therefore, we propose that initial sampling quality is a critical determinant of both overall DSE effectiveness and the convergence toward optimal designs. To validate our hypothesis and assess the efficacy of the Seed Directive Generation (Warm-Start) method, we conducted comparative analyses against Random Sampling (RS), U-shaped Beta Sampling (BS), and Latin Hypercube Sampling (LHS) within heuristic-based DSE methods. As shown in Table 3, ADRS drops substantially when seed designs more accurately approximate the reference Pareto fronts. Both BS and LHS methods consistently outperform RS across NSGA-II, ACO, and MOEA/D. By reasoning about HLS design structure and viable optimization space, LLM optimally constructs superior approximate Pareto fronts within limited explored designs. When paired with traditional evolutionary algorithms (EA) as search engines, our method achieves sampling effects that exceed RS by 2.5×, BS by 1.8×, and LHS by 1.6×. For more details, please refer to Appendix D.2.
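For context, ADRS measures how far an approximate front is from the reference Pareto front; the exact formulation used in this paper is given in Appendix C.1. The sketch below is illustrative only, using the conventional definition (minimization objectives, worst per-objective relative deviation) rather than any formulation taken from the paper itself:

```python
def adrs(reference, approx):
    """Average Distance from Reference Set for minimization objectives:
    0 means the approximate front coincides with the reference front."""
    total = 0.0
    for ref in reference:
        # distance from one reference point to the closest approximate point,
        # measured as the worst per-objective relative deviation
        total += min(
            max((a - r) / r for a, r in zip(pt, ref))
            for pt in approx
        )
    return total / len(reference)

# A front that is 10% worse in every objective yields an ADRS of about 0.1
ref_front = [(1.0, 2.0), (2.0, 1.0)]
print(adrs(ref_front, [(1.1, 2.2), (2.2, 1.1)]))  # ~0.1
```

Lower is better, which is why the descent curves in Figure 5 trend toward zero as the search approaches the reference fronts.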
Figure 5 depicts the ADRS descent curves, showing that Warm-Start converges more rapidly towards reference Pareto fronts, achieving lower ADRS compared to probability distribution-based alternatives. The subsequent EA-based DSE further recovers entire Pareto fronts. More insightfully, we observed that the improved initial sampling quality produced minimal improvements for ACO. This is probably attributable to ACO's reliance on dynamic updates during the iteration process, which gradually override initial path advantages through evaporation and reinforcement, thereby diminishing the influence of initially superior solutions (detailed evidence in Appendix D.3). In contrast, EA-based methods explicitly preserve high-quality genes, allowing the quality of seed designs to persistently guide search directions and yield more substantial improvements.

5.3 Ablation Study: Infusing Optimization Intuition into Directive Configuration Tuning

Traditional EAs, lacking domain expertise, struggle to capture the nuanced relationship of optimization directives applied to specific HLS designs. We enable more sensible directive configuration tuning by introducing LLM to infuse hardware optimization intuition. To evaluate our approach, we conducted ablation experiments with the following settings:

•Baseline: We prompted LLM with the HLS design and the corresponding QoR to generate directive combinations and parameters different from the initial sampling designs.

•S1: We incorporated information of the pruned design space to constrain exploration within reasonable boundaries, examining the effectiveness of pruning strategies on DSE.

•S2: We analyzed optimization trajectories of selected parents to prompt LLM to generate novel designs, examining the benefits of the optimization trajectory reflection on the subsequent search.

•S3: We
further integrated bottleneck analysis and design refactoring operators, examining the enhancement effects of domain-specific hardware optimization knowledge on DSE.

[Figure 6 bars omitted: relative ADRS improvement of Baseline, S1, S2, and S3 on PolyBench, MachSuite, CHStone, and overall; annotated peaks include 4.0×, 5.3×, 3.6×, and 4.4×.]

Figure 6: Ablation of Adaptive Optimization.

The geometric mean improvement trend of ADRS in Figure 6 demonstrates that, given comparable search budgets, the proposed QoR-Aware Adaptive Optimization (S3) system effectively leverages prior knowledge for more efficient design space exploration. Without proper perception of design space dimensions (Baseline), aggressive exploration often encounters invalid designs, comprising 12.4% of total designs, degrading the efficiency of DSE. Constraining the exploration space within the ranges established by our Feature-Driven Pruning (S1) eliminated 89.9% of invalid designs. Furthermore, introducing optimization trajectory reflection (S2) improved the effectiveness of optimization by 20.5%. Injecting domain-specific knowledge into the search process (S3) further improved the effectiveness by an additional 45.0%. Additionally, our approach maintains prompt interfaces that enable further incorporation of hardware optimization preferences and resource constraints for flexible outputs.

6 Conclusion

This paper presents iDSE, an effective and efficient LLM-navigated framework for automated high-level synthesis design space exploration, addressing the challenges of time-consuming hardware optimization with steep expertise barriers. Our approach leverages LLMs to enable elegant identification of HLS design bottlenecks and exploitation of optimization opportunities, construct representative initial sampling designs, and expedite subsequent refinement within pruned design spaces.
Experimental results substantiate that iDSE markedly outperforms traditional heuristic-based DSE methods, achieving superior approximation to the Pareto front with substantially reduced search budgets. Furthermore, iDSE rapidly converges to more comprehensive and compelling Pareto-optimal designs, expanding the boundary of HLS advantages in agile hardware development and optimization.

References

[1] Norman P. Jouppi, Doe Hyun Yoon, Matthew Ashcraft, Mark Gottscho, Thomas B. Jablin, George Kurian, James Laudon, Sheng Li, Peter Ma, Xiaoyu Ma, et al. Ten lessons from three generations shaped Google's TPUv4i: Industrial product. In 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA), pages 1–14, 2021.

[2] Norman P. Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, Suresh Bhatia, Nan Boden, Al Borchers, et al. In-datacenter performance analysis of a tensor processing unit. In 2017 ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA), pages 1–12, 2017.

[3] Yu-Hsin Chen, Tushar Krishna, Joel S. Emer, and Vivienne Sze. Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks. IEEE Journal of Solid-State Circuits, 52(1):127–138, 2017.

[4] Ashish Gondimalla, Noah Chesnut, Mithuna Thottethodi, and T. N. Vijaykumar. SparTen: A sparse tensor accelerator for convolutional neural networks. In Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), pages 151–165, 2019.

[5] Sixu Li, Yang Zhao, Chaojian Li, Bowei Guo, Jingqun Zhang, Wenbo Zhu, Zhifan Ye, Cheng Wan, and Yingyan Celine Lin. Fusion-3D: Integrated acceleration for instant 3D reconstruction and real-time rendering. In 2024 57th IEEE/ACM International Symposium on Microarchitecture (MICRO), pages 78–91, 2024.

[6] Dongseok Im and Hoi-Jun Yoo. CamPU: A multi-camera processing unit for deep learning-based 3D spatial computing systems.
In 2024 57th IEEE/ACM International Symposium on Microarchitecture (MICRO), pages 50–63, 2024.

[7] Sabrina M. Neuman, Radhika Ghosal,
Thomas Bourgeat, Brian Plancher, and Vijay Janapa Reddi. RoboShape: Using topology patterns to scalably and flexibly deploy accelerators across robots. In Proceedings of the 50th Annual International Symposium on Computer Architecture (ISCA), 2023.

[8] Sabrina M. Neuman, Brian Plancher, Thomas Bourgeat, Thierry Tambe, Srinivas Devadas, and Vijay Janapa Reddi. Robomorphic computing: a design methodology for domain-specific accelerators parameterized by robot morphology. In Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), pages 674–686, 2021.

[9] Seunghee Han, Seungjae Moon, Teokkyu Suh, JaeHoon Heo, and Joo-Young Kim. BLESS: Bandwidth and locality enhanced SMEM seeding acceleration for DNA sequencing. In 2024 ACM/IEEE 51st Annual International Symposium on Computer Architecture (ISCA), pages 582–596, 2024.

[10] Julian Pavon, Ivan Vargas Valdivieso, Carlos Rojas, Cesar Hernandez, Mehmet Aslan, Roger Figueras, Yichao Yuan, Joël Lindegger, Mohammed Alser, Moll, et al. QUETZAL: Vector acceleration framework for modern genome sequence analysis algorithms. In 2024 ACM/IEEE 51st Annual International Symposium on Computer Architecture (ISCA), pages 597–612, 2024.

[11] Max Doblas, Oscar Lostes-Cazorla, Quim Aguado-Puig, Nick Cebry, Pau Fontova-Musté, Christopher Frances Batten, Santiago Marco-Sola, and Miquel Moretó. GMX: Instruction set extensions for fast, scalable, and efficient genome sequence alignment. In Proceedings of the 56th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), pages 1466–1480, 2023.

[12] Damla Senol Cali, Konstantinos Kanellopoulos, Joël Lindegger, Zülal Bingöl, Gurpreet S. Kalsi, Ziyi Zuo, Can Firtina, Meryem Banu Cavlak, Jeremie Kim, Nika Mansouri Ghiasi, Singh, et al. SeGraM: a universal hardware accelerator for genomic sequence-to-graph and sequence-to-sequence mapping.
In Proceedings of the 49th Annual International Symposium on Computer Architecture (ISCA), pages 638–655, 2022.

[13] Hadi Esmaeilzadeh, Emily Blem, Renee St. Amant, Karthikeyan Sankaralingam, and Doug Burger. Dark silicon and the end of multicore scaling. IEEE Micro, 32(3):122–134, 2012.

[14] Yuze Chi, Weikang Qiao, Atefeh Sohrabizadeh, Jie Wang, and Jason Cong. Democratizing domain-specific computing. Communications of the ACM, 66(1):74–85, 2022.

[15] Jason Cong, Bin Liu, Stephen Neuendorffer, Juanjo Noguera, Kees Vissers, and Zhiru Zhang. High-Level Synthesis for FPGAs: From prototyping to deployment. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 30(4):473–491, 2011.

[16] Jason Cong, Jason Lau, Gai Liu, Stephen Neuendorffer, Peichen Pan, Kees Vissers, and Zhiru Zhang. FPGA HLS today: Successes, challenges, and opportunities. ACM Transactions on Reconfigurable Technology and Systems, 15(4), 2022.

[17] Benjamin Carrion Schafer, Takashi Takenaka, and Kazutoshi Wakabayashi. Adaptive simulated annealer for high level synthesis design space exploration. In 2009 International Symposium on VLSI Design, Automation and Test, pages 106–109, 2009.

[18] Benjamin Carrion Schafer. Parallel high-level synthesis design space exploration for behavioral IPs of exact latencies. ACM Transactions on Design Automation of Electronic Systems (TODAES), 22(4), 2017.

[19] Benjamin Carrion Schafer. Probabilistic multiknob high-level synthesis design space exploration acceleration. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 35(3):394–406, 2016.

[20] Qi Sun, Tinghuan Chen, Siting Liu, Jianli Chen, Hao Yu, and Bei Yu. Correlated multi-objective multi-fidelity optimization for HLS directives design. ACM Transactions on Design Automation of Electronic Systems (TODAES), 27(4), 2022.

[21] Lorenzo Ferretti, Giovanni
Ansaloni, and Laura Pozzi. Lattice-traversing design space exploration for high level synthesis. In 2018 IEEE 36th International Conference on Computer Design (ICCD), pages 210–217, 2018.

[22] Cody Hao Yu, Peng Wei, Max Grossman, Peng Zhang, Vivek Sarkar, and Jason Cong. S2FA: An accelerator automation framework for heterogeneous computing in datacenters. In 2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC), pages 1–6, 2018.

[23] Yunsheng Bai, Atefeh Sohrabizadeh, Zongyue Qin, Ziniu Hu, Yizhou Sun, and Jason Cong. Towards a comprehensive benchmark for high-level synthesis targeted to FPGAs. In Proceedings of the 37th International Conference on Neural Information Processing Systems, 2023.

[24] Atefeh Sohrabizadeh, Yunsheng Bai, Yizhou Sun, and Jason Cong. Automated accelerator optimization aided by graph neural networks. In Proceedings of the 59th ACM/IEEE Design Automation Conference (DAC), pages 55–60, 2022.

[25] Atefeh Sohrabizadeh, Yunsheng Bai, Yizhou Sun, and Jason Cong. Robust GNN-based representation learning for HLS. In 2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD), pages 1–9, 2023.

[26] Nan Wu, Yuan Xie, and Cong Hao. IronMan-Pro: Multiobjective design space exploration in HLS via reinforcement learning and graph neural network-based modeling. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 42(3):900–913, 2023.

[27] Jieru Zhao, Liang Feng, Sharad Sinha, Wei Zhang, Yun Liang, and Bingsheng He. COMBA: A comprehensive model-based analysis framework for high level synthesis of real applications. In 2017 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), pages 430–437, 2017.

[28] Guanwen Zhong, Alok Prakash, Yun Liang, Tulika Mitra, and Smail Niar. Lin-Analyzer: A high-level performance analysis tool for FPGA-based accelerators. In 2016 53rd ACM/EDAC/IEEE Design Automation Conference (DAC), pages 1–6, 2016.
[29] Lorenzo Ferretti, Andrea Cini, Georgios Zacharopoulos, Cesare Alippi, and Laura Pozzi. Graph neural networks for high-level synthesis design space exploration. ACM Transactions on Design Automation of Electronic Systems (TODAES), 28(2), 2022.

[30] Yunsheng Bai, Atefeh Sohrabizadeh, Zijian Ding, Rongjian Liang, Weikai Li, Ding Wang, Haoxing Ren, Yizhou Sun, and Jason Cong. Learning to compare hardware designs for high-level synthesis. In 2024 ACM/IEEE 6th Symposium on Machine Learning for CAD (MLCAD), pages 1–7, 2024.

[31] Huizhen Kuang, Xianfeng Cao, Jingyuan Li, and Lingli Wang. HGBO-DSE: Hierarchical GNN and Bayesian optimization based HLS design space exploration. In 2023 International Conference on Field Programmable Technology (ICFPT), pages 106–114, 2023.

[32] Xi Wang, Gwok-Waa Wan, Sam-Zaak Wong, Layton Zhang, Tianyang Liu, Qi Tian, and Jianmin Ye. ChatCPU: An agile CPU design and verification platform with LLM. In Proceedings of the 61st ACM/IEEE Design Automation Conference (DAC), 2024.

[33] Ke Xu, Jialin Sun, Yuchen Hu, Xinwei Fang, Weiwei Shan, Xi Wang, and Zhe Jiang. MEIC: Re-thinking RTL debug automation using LLMs. In Proceedings of the 43rd IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2025.

[34] Haoyuan Wu, Zhuolun He, Xinyun Zhang, Xufeng Yao, Su Zheng, Haisheng Zheng, and Bei Yu. ChatEDA: A large language model powered autonomous agent for EDA. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 43(10):3184–3197, 2024.

[35] Yunda Tsai, Mingjie Liu, and Haoxing Ren. RTLFixer: Automatically fixing RTL syntax
errors with large language model. In Proceedings of the 61st ACM/IEEE Design Automation Conference (DAC), 2024.

[36] Yonggan Fu, Yongan Zhang, Zhongzhi Yu, Sixu Li, Zhifan Ye, Chaojian Li, Cheng Wan, and Yingyan Celine Lin. GPT4AIGChip: Towards next-generation AI accelerator design automation via large language models. In 2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD), pages 1–9, 2023.

[37] Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V Le, Denny Zhou, and Xinyun Chen. Large language models as optimizers. arXiv preprint arXiv:2309.03409, 2023.

[38] Shengcai Liu, Caishun Chen, Xinghua Qu, Ke Tang, and Yew-Soon Ong. Large language models as evolutionary optimizers. In 2024 IEEE Congress on Evolutionary Computation (CEC), pages 1–8, 2024.

[39] Beichen Huang, Xingyu Wu, Yu Zhou, Jibin Wu, Liang Feng, Ran Cheng, and Kay Chen Tan. Exploring the true potential: Evaluating the black-box optimization capability of large language models. arXiv preprint arXiv:2404.06290, 2024.

[40] Tennison Liu, Nicolás Astorga, Nabeel Seedat, and Mihaela van der Schaar. Large language models to enhance Bayesian optimization. arXiv preprint arXiv:2402.03921, 2024.

[41] Xingyu Wu, Sheng-Hao Wu, Jibin Wu, Liang Feng, and Kay Chen Tan. Evolutionary computation in the era of large language model: Survey and roadmap. IEEE Transactions on Evolutionary Computation, 29(2):534–554, 2025.

[42] Zijian Ding, Atefeh Sohrabizadeh, Weikai Li, Zongyue Qin, Yizhou Sun, and Jason Cong. Efficient task transfer for HLS DSE. In Proceedings of the 43rd IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2025.

[43] Atefeh Sohrabizadeh, Cody Hao Yu, Min Gao, and Jason Cong. AutoDSE: Enabling software programmers to design efficient FPGA accelerators. ACM Transactions on Design Automation of Electronic Systems (TODAES), 27(4), 2022.

[44] Guanwen Zhong, Vanchinathan Venkataramani, Yun Liang, Tulika Mitra, and Smail Niar.
Design space exploration of multiple loops on FPGAs using high level synthesis. In 2014 IEEE 32nd International Conference on Computer Design (ICCD), pages 456–463, 2014.

[45] Nam Khanh Pham, Amit Kumar Singh, Akash Kumar, and Mi Mi Aung Khin. Exploiting loop-array dependencies to accelerate the design space exploration with high level synthesis. In 2015 Design, Automation & Test in Europe Conference & Exhibition (DATE), pages 157–162, 2015.

[46] Stéphane Pouget, Louis-Noël Pouchet, and Jason Cong. A unified framework for automated code transformation and pragma insertion. In Proceedings of the 2025 ACM/SIGDA International Symposium on Field Programmable Gate Arrays (FPGA), pages 187–198, 2025.

[47] Stéphane Pouget, Louis-Noël Pouchet, and Jason Cong. Automatic hardware pragma insertion in high-level synthesis: A non-linear programming approach. ACM Transactions on Design Automation of Electronic Systems (TODAES), 30(2), 2025.

[48] Nan Wu, Hang Yang, Yuan Xie, Pan Li, and Cong Hao. High-level synthesis performance prediction using GNNs: benchmarking, modeling, and advancing. In Proceedings of the 59th ACM/IEEE Design Automation Conference (DAC), pages 49–54, 2022.

[49] Ruizhe Zhong, Xingbo Du, Shixiong Kai, Zhentao Tang, Siyuan Xu, Hui-Ling Zhen, Jianye Hao, Qiang Xu, Mingxuan Yuan, and Junchi Yan. LLM4EDA: Emerging progress in large language models for electronic design automation. arXiv preprint arXiv:2401.12224, 2023.

[50] Zehua Pei, Hui-Ling Zhen, Mingxuan Yuan, Yu Huang, and Bei Yu.
BetterV: Controlled Verilog generation with discriminative guidance. In Proceedings of the 41st International Conference on Machine Learning (ICML), 2024.

[51] Bingkun Yao, Ning Wang, Jie Zhou, Xi Wang, Hong Gao, Zhe Jiang, and Nan Guan. Location is key: Leveraging large language model for functional bug localization in Verilog. arXiv preprint arXiv:2409.15186, 2024.

[52] Yuchen Hu, Junhao Ye, Ke Xu, Jialin Sun, Shiyue Zhang, Xinyao Jiao, Dingrong Pan, Jie Zhou, Ning Wang, Weiwei Shan, Xinwei Fang, Xi Wang, Nan Guan, and Zhe Jiang. UVLLM: An automated universal RTL verification framework using LLMs. arXiv preprint arXiv:2411.16238, 2024.

[53] Sam-Zaak Wong, Gwok-Waa Wan, Dongping Liu, and Xi Wang. VGV: Verilog generation using visual capabilities of multi-modal large language models. In 2024 IEEE LLM Aided Design Workshop (LAD), pages 1–5, 2024.

[54] Shailja Thakur, Baleegh Ahmad, Hammond Pearce, Benjamin Tan, Brendan Dolan-Gavitt, Ramesh Karri, and Siddharth Garg. VeriGen: A large language model for Verilog code generation. ACM Transactions on Design Automation of Electronic Systems (TODAES), 29(3), 2024.

[55] Yuxuan Yin, Yu Wang, Boxun Xu, and Peng Li. ADO-LLM: Analog design Bayesian optimization with in-context learning of large language models. In Proceedings of the 43rd IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2025.

[56] Dimple Vijay Kochar, Hanrui Wang, Anantha Chandrakasan, and Xin Zhang. LEDRO: LLM-enhanced design space reduction and optimization for analog circuits. arXiv preprint arXiv:2411.12930, 2024.

[57] Ning Wang, Bingkun Yao, Jie Zhou, Yuchen Hu, Xi Wang, Nan Guan, and Zhe Jiang. VeriDebug: A unified LLM for Verilog debugging via contrastive embedding and guided correction. arXiv preprint arXiv:2504.19099, 2025.

[58] Ning Wang, Bingkun Yao, Jie Zhou, Yuchen Hu, Xi Wang, Nan Guan, and Zhe Jiang. Insights from verification: Training a Verilog generation LLM with reinforcement learning with testbench feedback.
arXiv preprint arXiv:2504.15804, 2025.

[59] Kangwei Xu, Grace Li Zhang, Xunzhao Yin, Cheng Zhuo, Ulf Schlichtmann, and Bing Li. Automated C/C++ program repair for high-level synthesis via large language models. In Proceedings of the 2024 ACM/IEEE International Symposium on Machine Learning for CAD (MLCAD), 2024.

[60] Chenwei Xiong, Cheng Liu, Huawei Li, and Xiaowei Li. HLSPilot: LLM-based high-level synthesis. In Proceedings of the 43rd IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2025.

[61] Haocheng Xu, Haotian Hu, and Sitao Huang. Optimizing high-level synthesis designs with retrieval-augmented large language models. In 2024 IEEE LLM Aided Design Workshop (LAD), pages 1–5, 2024.

[62] Neha Prakriya, Zijian Ding, Yizhou Sun, and Jason Cong. LIFT: LLM-based pragma insertion for HLS via GNN supervised fine-tuning. arXiv preprint arXiv:2504.21187, 2025.

[63] Hanyu Wang, Xinrui Wu, Zijian Ding, Su Zheng, Chengyue Wang, Tony Nowatzki, Yizhou Sun, and Jason Cong. LLM-DSE: Searching accelerator parameters with LLM agents, 2025.

[64] Luca Collini, Siddharth Garg, and Ramesh Karri. C2HLSC: Leveraging large language models to bridge the software-to-hardware design gap. arXiv preprint arXiv:2412.00214, 2024.

[65] Luca Collini, Andrew Hennessee, Ramesh Karri, and Siddharth Garg. Can reasoning models reason about hardware? An agentic HLS perspective. arXiv preprint arXiv:2503.12721, 2025.

[66] Xi Lin, Hui-Ling Zhen, Zhenhua Li, Qing-Fu Zhang, and Sam
Kwong. Pareto multi-task learning. In Advances in Neural Information Processing Systems, volume 32, 2019.

[67] Xingchao Liu, Xin Tong, and Qiang Liu. Profiling Pareto front with multi-objective Stein variational gradient descent. In Advances in Neural Information Processing Systems, volume 34, pages 14721–14733, 2021.

[68] Qingfu Zhang and Hui Li. MOEA/D: A multiobjective evolutionary algorithm based on decomposition. IEEE Transactions on Evolutionary Computation, 11(6):712–731, 2007.

[69] Aviv Navon, Aviv Shamsian, Ethan Fetaya, and Gal Chechik. Learning the Pareto front with hypernetworks. In International Conference on Learning Representations (ICLR), 2021.

[70] Xi Lin, Zhiyuan Yang, Xiaoyuan Zhang, and Qingfu Zhang. Pareto set learning for expensive multi-objective optimization. In Proceedings of the 36th International Conference on Neural Information Processing Systems, 2022.

[71] Xiaoyuan Zhang, Xi Lin, Bo Xue, Yifan Chen, and Qingfu Zhang. Hypervolume maximization: a geometric view of Pareto set learning. In Proceedings of the 37th International Conference on Neural Information Processing Systems, 2023.

[72] Yifan Zhong, Chengdong Ma, Xiaoyuan Zhang, Ziran Yang, Haojun Chen, Qingfu Zhang, Siyuan Qi, and Yaodong Yang. Panacea: Pareto alignment via preference adaptation for LLMs. In Advances in Neural Information Processing Systems, volume 37, pages 75522–75558, 2024.

[73] Seung Hyun Lee, Yinxiao Li, Junjie Ke, Innfarn Yoo, Han Zhang, Jiahui Yu, Qifei Wang, Fei Deng, Glenn Entis, Junfeng He, Gang Li, Sangpil Kim, Irfan Essa, and Feng Yang. Parrot: Pareto-optimal multi-reward reinforcement learning framework for text-to-image generation. In European Conference on Computer Vision (ECCV), pages 462–478, 2024.

[74] Fei Liu, Xialiang Tong, Mingxuan Yuan, Xi Lin, Fu Luo, Zhenkun Wang, Zhichao Lu, and Qingfu Zhang. Evolution of heuristics: towards efficient automatic algorithm design using large language model.
In Proceedings of the 41st International Conference on Machine Learning (ICML), 2024.

[75] Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Matej Balog, M Pawan Kumar, Emilien Dupont, Francisco JR Ruiz, Jordan S Ellenberg, Pengming Wang, Omar Fawzi, et al. Mathematical discoveries from program search with large language models. Nature, 625(7995):468–475, 2024.

[76] Zhi Zheng, Zhuoliang Xie, Zhenkun Wang, and Bryan Hooi. Monte Carlo tree search for comprehensive exploration in LLM-based automatic heuristic design. arXiv preprint arXiv:2501.08603, 2025.

[77] Shunyu Yao, Fei Liu, Xi Lin, Zhichao Lu, Zhenkun Wang, and Qingfu Zhang. Multi-objective evolution of heuristic using large language model. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 27144–27152, 2025.

[78] Haoran Ye, Jiarui Wang, Zhiguang Cao, Federico Berto, Chuanbo Hua, Haeyeon Kim, Jinkyoo Park, and Guojie Song. ReEvo: Large language models as hyper-heuristics with reflective evolution. arXiv preprint arXiv:2402.01145, 2024.

[79] Angelica Chen, David Dohan, and David So. EvoPrompting: Language models for code-level neural architecture search. In Advances in Neural Information Processing Systems, volume 36, pages 7787–7817, 2023.

[80] Caiyang Yu, Xianggen Liu, Yifan Wang, Yun Liu, Wentao Feng, Xiong Deng, Chenwei Tang, and Jiancheng Lv. GPT-NAS: Neural architecture search meets generative pre-trained transformer model. Big Data Mining and Analytics, 8(1):45–64, 2025.

[81] Muhammad Umair Nasir, Sam Earle, Julian Togelius, Steven James, and Christopher Cleghorn. LLMatic: neural architecture search via large language models
and quality diversity optimization. In Proceedings of the Genetic and Evolutionary Computation Conference, pages 1110–1118, 2024.

[82] Haishuai Wang, Yang Gao, Xin Zheng, Peng Zhang, Hongyang Chen, Jiajun Bu, and Philip S Yu. Graph neural architecture search with GPT-4. arXiv preprint arXiv:2310.01436, 2023.

[83] Clint Morris, Michael Jurado, and Jason Zutty. LLM guided evolution - the automation of models advancing models. In Proceedings of the Genetic and Evolutionary Computation Conference, pages 377–384, 2024.

[84] Xun Zhou, Xingyu Wu, Liang Feng, Zhichao Lu, and Kay Chen Tan. Design principle transfer in neural architecture search via large language models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 23000–23008, 2025.

[85] Lei Xu, Shanshan Wang, Emmanuel Casseau, and Chenglong Xiao. Intelligent4DSE: Optimizing high-level synthesis design space exploration with graph neural networks and large language models. arXiv preprint arXiv:2504.19649, 2025.

[86] Benjamin Carrion Schafer and Zi Wang. High-level synthesis design space exploration: Past, present, and future. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 39(10):2628–2639, 2020.

[87] Lorenzo Ferretti, Jihye Kwon, Giovanni Ansaloni, Giuseppe Di Guglielmo, Luca P. Carloni, and Laura Pozzi. Leveraging prior knowledge for effective design-space exploration in high-level synthesis. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 39(11):3736–3747, 2020.

[88] Aggelos Ferikoglou, Andreas Kakolyris, Dimosthenis Masouros, Dimitrios Soudris, and Sotirios Xydis. CollectiveHLS: A collaborative approach to high-level synthesis design optimization. ACM Transactions on Reconfigurable Technology and Systems, 18(1), 2024.

[89] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2):182–197, 2002.
[90] Louis-Noel Pouchet and Tomofumi Yuki. PolyBench/C 4.2, 2016. Available at: http://polybench.sf.net.

[91] Yuko Hara, Hiroyuki Tomiyama, Shinya Honda, Hiroaki Takada, and Katsuya Ishii. CHStone: A benchmark program suite for practical C-based high-level synthesis. In 2008 IEEE International Symposium on Circuits and Systems (ISCAS), pages 1192–1195, 2008.

[92] Brandon Reagen, Robert Adolf, Yakun Sophia Shao, Gu-Yeon Wei, and David Brooks. MachSuite: Benchmarks for accelerator design and customized architectures. In 2014 IEEE International Symposium on Workload Characterization (IISWC), pages 110–119, 2014.

[93] Vitis. Vitis High-Level Synthesis User Guide (UG1399), 2022. Available at: https://docs.amd.com/r/2022.1-English/ug1399-vitis-hls.

[94] Marco Dorigo, Mauro Birattari, and Thomas Stutzle. Ant colony optimization. IEEE Computational Intelligence Magazine, 1(4):28–39, 2006.

Appendix

Contents

A iDSE Implementation
  A.1 Directive Function Description
  A.2 Specific Pruning Strategies
  A.3 Screening Mechanism for Designs with Optimization Potential
  A.4 iDSE Hyperparameter Configurations
B Prompt Engineering
  B.1 Prompt for Feature Extractor
  B.2 Prompt for Feature-Driven Pruning
  B.3 Prompt for Seed Directive Generation
  B.4 Prompt for QoR-Aware Adaptive Optimization

C Experiment Details
  C.1 Evaluation Metrics
  C.2 Benchmark Descriptions and Design Structures
  C.3 Heuristic-Based DSE Configurations

D Supplementary Results
  D.1 Determination of Initial Sample Sizes
  D.2 Construction of Pareto Front through Initial Single-Batch Sampling
  D.3 Impact of Initial Sampling and Subsequent Search on Heuristic-Based DSE
  D.4 Analysis of LLM Performance in DSE
  D.5 Information of Assets

E Limitations and Future Work

A iDSE Implementation

A.1 Directive Function Description

iDSE supports three optimization directive types in Vitis HLS [93]: PIPELINE and UNROLL for loops, and ARRAY_PARTITION for arrays. This can be generalized to other vendor HLS tools that support customizing microarchitectures through optimization directive tuning. Table 4 summarizes these optimization directive configurations and the feature vectors generated for the case in Figure 2. Activating PIPELINE can boost throughput by overlapping time domains, while the UNROLL factor determines hardware loop parallelism. Meanwhile, ARRAY_PARTITION splits arrays to provide sufficient data bandwidth for unrolled loops.
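The data-layout effect of ARRAY_PARTITION can be made concrete with a small sketch; plain Python stands in for the hardware banking here, following the standard Vitis semantics (cyclic interleaves elements across banks, block assigns consecutive chunks):

```python
def array_partition(arr, factor, mode):
    """Mimics ARRAY_PARTITION bank assignment for a 1-D array."""
    if mode == "cyclic":
        # element i goes to bank i % factor
        return [arr[b::factor] for b in range(factor)]
    if mode == "block":
        # element i goes to bank i // ceil(len(arr) / factor)
        size = -(-len(arr) // factor)  # ceiling division
        return [arr[b * size:(b + 1) * size] for b in range(factor)]
    raise ValueError(f"unknown mode: {mode}")

A = list(range(8))
print(array_partition(A, 2, "cyclic"))  # [[0, 2, 4, 6], [1, 3, 5, 7]]
print(array_partition(A, 2, "block"))   # [[0, 1, 2, 3], [4, 5, 6, 7]]
```

With a cyclic factor of 2, consecutive elements land in different banks, which is why it pairs naturally with a factor-2 unrolled loop that reads two neighboring elements per cycle.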
We encode the PIPELINE and ARRAY_PARTITION types as discrete integer vectors and combine them with the unroll and partition factor vectors. Constructing feature vectors as tuples of specific numerical values drawn from these directive configurations reduces unnecessary HLS domain-specific semantics in LLM reasoning.

Table 4: Optimization Directive Configuration and Generated Feature Vectors
Example Structure | Optimization Directive Config.                                   | Example Feature Vector
Loop              | PIPELINE: "off", "on"; UNROLL: integer                           | {'name': 'mul', 'pipeline': 1, 'unroll': 2}
Array             | ARRAY_PARTITION: "complete", "block", "cyclic"; factor: integer  | {'name': 'C', 'type': 2, 'dim': 1, 'factor': 2}

 1  # Project Setup
 2  open_project vector_mul
 3  add_files vector_mul.cpp
 4  set_top vector_mul
 5
 6  # Solution Configuration
 7  open_solution solution
 8  set_part {xczu7ev-ffvc1156-2-e}
 9  create_clock -period 10 -name default
10
11  # Array Partition Directives
12  set_directive_array_partition -type cyclic -factor 2 \
13      -dim 1 "vector_mul" A
14  set_directive_array_partition -type cyclic -factor 2 \
15      -dim 1 "vector_mul" B
16  set_directive_array_partition -type cyclic -factor 2 \
17      -dim 1 "vector_mul" C
18
19  # Loop Pipeline Directives
20  set_directive_pipeline "vector_mul/mul"
21
22  # Loop Unroll Directives
23  set_directive_unroll -factor 2 "vector_mul/mul"
24
25  # HLS Synthesis
26  csynth_design
27  exit

Listing 1: Vitis HLS Project Configuration for Vector Hadamard Product

Listing 1 presents an automatically generated Tcl script within the iDSE workflow, designed to optimize the vector Hadamard product HLS design shown in Figure 2. Lines 12–17, 20, and 23 correspond to the optimization directive insertion commands generated from the example feature vectors in Table 4. Specifically, line 20 activates pipeline optimization for the mul loop, while line 23 applies a factor-2 parallel unrolling to the same loop. Lines 12–17 partition the arrays A, B, and C, using a Cyclic partitioning strategy with a factor of 2 to align with the data access throughput of the loop. Within the ARRAY_PARTITION directive, Cyclic creates smaller arrays by interleaving elements from the original array, while Block creates smaller arrays from consecutive blocks of the original array.
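To make the Block/Cyclic distinction concrete, here is a minimal sketch (not from the paper's artifact) of how the two strategies split an 8-element array with a factor of 2, matching the factor-2 cyclic partitioning used in Listing 1:

```python
# Block partitioning takes consecutive chunks; Cyclic interleaves
# elements round-robin. Both sketches assume `factor` divides len(arr).

def block_partition(arr, factor):
    """Split arr into `factor` sub-arrays of consecutive elements."""
    size = len(arr) // factor
    return [arr[i * size:(i + 1) * size] for i in range(factor)]

def cyclic_partition(arr, factor):
    """Split arr into `factor` sub-arrays by interleaving elements."""
    return [arr[i::factor] for i in range(factor)]

a = list(range(8))
print(block_partition(a, 2))   # [[0, 1, 2, 3], [4, 5, 6, 7]]
print(cyclic_partition(a, 2))  # [[0, 2, 4, 6], [1, 3, 5, 7]]
```

Cyclic partitioning suits loops that access consecutive indices in the same unrolled iteration, since adjacent elements land in different physical banks.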
A.2 Specific Pruning Strategies

The Feature-Driven Pruning method addresses the critical challenge of inefficient design space exploration by judiciously eliminating redundant or impractical directives, particularly when aggressive parallelism directives are configured within large design footprints. A primary consideration is loop structure analysis, particularly for nested loops, where loop bounds that are indivisible by the optimization factors can lead to significant performance degradation. To mitigate this, the parameter configuration is constructed from the common factor vectors derived from array and loop bounds, ensuring alignment between optimization directives and hardware-imposed constraints.

For nested loop structures, the pruner employs rule-based elimination to prevent hardware over-utilization. When the outer loop is pipelined, inner loops are automatically fully unrolled, rendering inner loop unroll factor exploration redundant; the pruner detects such scenarios and disables the unnecessary unroll directives for inner loops. Additionally, loops with large trip counts trigger pruning of aggressive parallelism configurations, guided by the intuition that excessive unrolling or pipelining often results in impractical resource consumption. This strategy balances exploration efficiency with design flexibility while allowing designers to tailor pruning decisions through a prompt interface for specialized optimization preferences.

Structural constraints further refine the pruning rules. For outer loops that are not perfect (i.e., only the innermost loop has a body, with no inter-loop logic and constant bounds), inner loop unrolling is prohibited. In multilayer nested loops, unrolling directives are automatically disabled for the outermost loop to prevent excessive hardware resource utilization. Outer loops containing multiple sub-loops within their bodies are likewise restricted from unrolling.
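The pruning rules above can be sketched roughly as follows. The loop-record fields and the trip-count cutoff are illustrative assumptions, not values taken from the iDSE implementation.

```python
# Hypothetical sketch of rule-based unroll-factor pruning. MAX_TRIP is
# an assumed cutoff for "large" trip counts; real thresholds may differ.

MAX_TRIP = 64  # assumed threshold for aggressive-parallelism pruning

def candidate_factors(bound):
    """Legal unroll factors: divisors of the loop bound, so unrolling
    aligns with indivisible loop boundaries."""
    return [f for f in range(1, bound + 1) if bound % f == 0]

def prune(loop):
    """Return the unroll factors left after applying the pruning rules.

    `loop` is a dict with keys: trip, outer_pipelined, is_perfect.
    """
    # Rule 1: a pipelined outer loop forces full unrolling of inner
    # loops, so exploring inner unroll factors is redundant.
    if loop["outer_pipelined"]:
        return [0]
    # Rule 2: an imperfect enclosing loop prohibits inner unrolling.
    if not loop["is_perfect"]:
        return [0]
    factors = candidate_factors(loop["trip"])
    # Rule 3: large trip counts -> drop aggressive parallelism.
    if loop["trip"] > MAX_TRIP:
        factors = [f for f in factors if f <= MAX_TRIP // 2]
    return factors

inner = {"trip": 16, "outer_pipelined": True, "is_perfect": True}
print(prune(inner))  # [0] -- exploration disabled under a pipelined outer loop
free = {"trip": 8, "outer_pipelined": False, "is_perfect": True}
print(prune(free))   # [1, 2, 4, 8]
```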
Furthermore, missing directives in configuration files are explicitly
set to default values (e.g., unroll factor 0) to prevent undefined tool behavior. For array partitioning, the Complete partition type automatically disables partition factors.

A.3 Screening Mechanism for Designs with Optimization Potential

During non-dominated sorting, we calculate domination counts for each design and establish a domination relationship matrix. We then categorize the front layers, placing non-dominated solutions into the first front, solutions dominated only by the first front into the second front, and continuing this classification hierarchically. To preserve population diversity, the crowding distance calculation module evaluates solutions within identical Pareto front layers across both the latency and resource consumption dimensions. After sorting the solution set along each objective dimension, crowding distances are determined by calculating Manhattan distances between adjacent solutions in the normalized objective space, with boundary designs receiving maximum priority values. Finally, a composite sorting strategy prioritizes solutions in lower front levels while preserving designs with greater crowding distances within the same level. This mechanism ensures both convergence and a uniform distribution of solutions along the Pareto front.

A.4 iDSE Hyperparameter Configurations

iDSE is driven by an LLM-based evolutionary algorithm whose hyperparameters are shown in Table 5. We set the initial number of samples to 12; the reasons for this choice are detailed in Appendix D.1. A fixed budget of 3 generations (Imax) balances design exploration effects against computational budget. Meanwhile, a dynamic population size (Pi) allows gradual scaling from initial exploration (N0) to refinement. Non-dominated sorting with crowding distance metrics prioritizes Pareto-optimal solutions, while the controlled injection of C2sub rank-2 suboptimal candidates per generation maintains solution diversity.
Table 5: Hyperparameter settings in iDSE.

Hyperparameter | Description                                                  | Value
Imax           | Number of evolutionary iterations                            | 3
N0             | Initial sampling designs for DSE warm-start                  | 12
Pi             | Population size per iteration stage: [Init, Iteration, ...]  | [12, ~12]
C2sub          | Suboptimal candidates selected from Rank 2 in Pareto front   | 3

B Prompt Engineering

This section presents our approach to prompting LLMs in the iDSE workflow. The following subsections detail our prompt architecture across the six key LLM agents of the iDSE workflow. Each component represents a key stage in our automated DSE framework, transforming traditional DSE methods into a more flexible LLM-navigated approach that ensures workflow robustness while remaining automated. We emphasize that the prompts we designed are not tailored to any specific benchmark. A unified prompt template is applied across all benchmarks to ensure the generalization of our methodology across diverse HLS designs. This approach objectively demonstrates the inherent capacity of LLMs for hardware optimization and design space exploration.

B.1 Prompt for Feature Extractor

In Figure 7, we present a few-shot example along with task descriptions and a structured output format used to prompt the LLM to extract key HLS design structure features. This step replaces the traditional preprocessing of DSE methods, which typically relies on the LLVM compiler to derive the necessary structure information, thereby simplifying system deployment. Subsequently, a script identifies all common divisors of the target array and loop boundaries, enabling the construction of the configurations for the optimization directives.

You function as an advanced code parser, specializing in
the decomposition of array and loop structures in C/C++ code.
∘ Arrays:
· Extract the array name.
· Identify the array dimensions.
· Determine the size of each dimension.
∘ Loops:
· Extract the loop name.
· Extract the outer loop name for nested loops.
· Calculate the maximum number of iterations for the loop.
• Critical Constraints: Do not consider pointers. A one-dimension array outputs 'dim': [1], a two-dimension array outputs 'dim': [1, 2], etc. Keep "type": [0, 1, 2] and "pipeline": [0, 1] unchanged. Ensure you only provide the parsed data in the specified JSON format, with no additional explanations.
• Reference: Example Input: {sample_hls_design}; Example Output: {sample_directive_config}
• Input: Pending HLS design: {hls_design}
The structured output for the parsed input code is as follows:
• Expected output:
{ "arrays": { "[array_name]": { "type": [0, 1, 2], "dim": [array_dimension], "factor_n": { "1": n_dimension_size, "2": n_dimension_size } } },
  "loops": { "[loop_name]": { "pipeline": [0, 1], "unroll": loop_trip_count, "outer_loop": outer_loop_name (only for nested loop) } } }

Figure 7: Prompt for Feature Extractor.

B.2 Prompt for Feature-Driven Pruning

In Figure 8, we prompt the LLM to leverage auxiliary pruning information in the structured JSON output from the previous step, thereby removing invalid configurations (where excessive parallelism leads to synthesis failure or duplicated optimization effects) for certain optimization directives. Furthermore, we employ additional scripts to detect and eliminate potentially invalid designs within the pruned design space, as the LLM alone may not fully capture the complex interdependencies among directives. This hybrid pruning strategy preserves full automation while ensuring operational robustness throughout the optimization workflow.

You are an expert in high-level synthesis (HLS) design space exploration. Your task is to prune the provided optimization directive configuration file to reduce the design space.
You will focus on optimizing three types of pragmas: array_partition, pipeline, and unroll.
• Note:
∘ For nested loops, set the innermost loop pipeline to 1; if the inner loop trip count > 32, set the outer loop pipeline to 0.
∘ Apply loop unroll pruning rules:
· If a loop has a variable bound or a non-exclusive inner loop body (imperfect loop), set the inner loop unroll to 0.
· For 3+ level nested loops, set the outermost loop unroll to 0.
· For more than one sub-loop, set the outer loop unroll to 0.
The configuration file provides parameters for these pragmas:
∘ array_partition:
· type: Specifies the partitioning method. 0 represents complete partitioning, 1 indicates block partitioning, and 2 indicates cyclic partitioning.
· dim: Specifies the dimension to partition. n specifies the n-th dimension.
· factor_n: Indicates the partitioning factor (applied to block or cyclic partition types); 0 denotes no partitioning.
∘ pipeline: Specifies whether to pipeline the current loop.
∘ unroll: Specifies the unrolling factor for the current loop.
• Input: HLS Design Code: {hls_design}; Initial Optimization Directive Configuration File (JSON): {original_pragma_config}
• Expected Output: The updated configuration file in JSON format.
{ "arrays": { "[array_name]": { "type": [partition_type], "dim": [array_dimension], "factor_n": { "1": [pruned_partition_factor], ... } } },
  "loops": { "[loop_name]": { "pipeline": [pipeline_value], "unroll": [pruned_unroll_factor], "outer_loop": outer_loop_name (only for nested loop)
}}}

Figure 8: Prompt for Feature-Driven Pruning.

B.3 Prompt for Seed Directive Generation

Figure 9 details our hierarchical prompt architecture, which integrates task descriptions, sampling objectives, and functional descriptions of the optimization directives. The sampling objectives are organized around three carefully defined optimization regimes:
• Prioritize Performance. Maximize computational throughput through aggressive parallelism, accepting elevated resource utilization.
• Prioritize Resource Utilization. Minimize hardware consumption via conservative parallelism allocation, prioritizing area efficiency over high performance.
• Performance-Cost Trade-off. Identify unroll factor configurations that strike a feasible balance between performance and hardware resource budgets.
These distinct objectives help ensure that the initial sampling designs achieve broad coverage of the Pareto front. Furthermore, the embedded directive specifications ground LLM reasoning in domain-specific HLS constraints and bridge the semantic gap between natural language instructions and hardware synthesis requirements through structured knowledge injection.

You are tasked with generating {num_sample} unique and effective pragma configuration combinations for optimizing the provided High-Level Synthesis (HLS) design. These configurations must target the following optimization goals while strictly adhering to the condition that only parameters available in the provided
https://arxiv.org/abs/2505.22086v1
configuration file can be used.
∘ Performance Optimization: Achieve the best performance, which will result in higher hardware consumption.
∘ Resource Utilization Optimization: Minimize hardware resource usage, which may result in reduced performance.
∘ PPA Tradeoff Optimization: Find a balanced unroll factor combination that offers a good tradeoff between performance and hardware utilization.
The configuration file provides parameters for pragmas:
• Note: < Directive configuration ... >
Make sure every loop and array (every dim) has its configuration. If you think it's unnecessary, set the corresponding unroll and factor to 0.
• Input: HLS Design Code: {hls_design}; Pragma parameters you can pick: {pragma_config}
• Expected Output:
[(({'name': [loop_name], 'pipeline': [value], 'unroll': [value]}, {...}), ({'name': [array_name], 'type': [value], 'dim': [value], 'factor': [value]}, {...})), ... ]

Figure 9: Prompt for Seed Directive Generation.

B.4 Prompt for QoR-Aware Adaptive Optimization

In this section, we present in detail the prompts of our QoR-Aware Adaptive Optimization system. In Figure 10, the hierarchical structure of the prompt for optimization trajectory reflection comprises:
• Parent Selection Mechanism. A rule-based prioritization balancing Pareto dominance and solution diversity for parental candidate selection.
• Bottleneck Identification. Systematic bottleneck identification targeting loop optimization, memory access patterns, and resource saturation thresholds to guide iterative refinements.
• QoR-Aware Adaptation. Synthesized circuit performance metrics (e.g., latency and resource utilization) link directive configurations to hardware implementation evaluations, allowing LLMs to trace optimization causality quantitatively.

We are conducting HLS design space exploration. Select at least {parent_num} optimal parental candidates from the provided configuration for subsequent crossover and mutation operations. Prioritize solutions demonstrating strong Pareto dominance characteristics and preservation of solution diversity. Evaluate and analyze the bottlenecks that still exist at your selected candidate points to provide a sufficiently informative reference for subsequent optimization, with around 3 sentences of analysis for each selected point.
• Selection Principle:
∘ Primary Sorting Criteria:
· Non-dominated Rank Priority (Lower rank = Higher priority)
· Crowding Distance Comparison (Higher distance = Better diversity)
∘ Multi-Objective Analysis:
· For equal rank: Prefer configurations with higher crowding distances to maintain population diversity
· Across different
ranks: Favor lower-rank solutions even with lower crowding distances
• Bottleneck Analysis Reasoning:
∘ Step 1: Loop Optimization Check. Check loop-carried dependencies blocking pipeline/unroll optimizations.
∘ Step 2: Memory Access Check. Verify array partitioning factors match unroll/pipeline parallelism.
∘ Step 3: Resource Saturation Check. Identify DSP/BRAM overuse from aggressive unrolling or mismatched partitioning.
• Configuration Parameters Description: < Directive configuration ... >
• Input: Input Sample Configuration and Corresponding QoR: {sample_config}; HLS Design Structure: {kernel_code}
• Expected Output: Format selected parents with selection rationale:
[{ "combination_index": [int], "loop_combination": [Full original structure], "array_combination": [Original structure], "selection_reason": { "dominance_characteristic": "Pareto Rank | Crowding Distance Comparison", "optimization_potential": "[identify current computation/memory_access bottleneck]" } }, ...]

Figure 10: Prompt for optimization trajectory reflection.

In Figure 11, we prompt the LLM for bottleneck analysis with QoR-aware adaptation. It first identifies compute/memory bottlenecks in selected HLS designs with optimization potential, and then performs targeted optimizations. To maintain genetic stability, we enforce parameter preservation by retaining a sufficient subset of each parent configuration.

Act as an HLS design optimization engine. Generate {num_crossover} valid configurations through bottleneck analysis.
Follow this protocol: You will be given directive combinations that you can refer to, along with the corresponding bottleneck analysis.
• Optimization Reasoning:
· Step 1: Identify Bottleneck Type
- Compute-bound: Check for high loop latency/resource usage.
- Memory-bound: Analyze array access patterns/bandwidth.
· Step 2: Apply Targeted Optimization
- Compute: Gradually increase the granularity of unroll for computationally bottlenecked loops.
- Memory: Partition arrays related to loops (block for sequential, cyclic for parallel) with partition factor ≥ unroll factor.
- When the bottleneck is computation, prioritize the application of fine-grained pipelines (pipeline on); when the bottleneck is memory access, prioritize the application of coarse-grained pipelines (pipeline off).
· Step 3: Validate & Ensure Novelty
- Reflect to ensure the generated directive combinations differ from the given reference designs.
• Configuration Parameters Description: < Directive configuration ... >
• Input: Elite Parent Configurations: {parent_config}; HLS Design Code: {kernel_code}; Valid Pragma Parameters: {config_loop_array}
• Expected Output: Generate exactly {num_crossover} configurations in this format:
[(({'name': [loop_name], 'pipeline': [value], 'unroll': [value]}, {...}), ({'name': [array_name], 'type': [value], 'dim': [value], 'factor': [value]}, {...})), ... ]

Figure 11: Prompt for bottleneck analysis with QoR-aware adaptation.

In Figure 12, we prompt the LLM for divergence-enhanced design refactoring to move beyond the constrained design space and generate novel directive configurations. This approach generates configurations distinct from the selected parent patterns. We prioritize loop optimizations while enforcing hardware scheduling constraints to ensure design feasibility. For example, restrictions are imposed on outer loop operations through trip count threshold control, the unroll factor is constrained to powers of two, and the partition type selection is differentiated based on the array's dimensionality.

Act as an HLS design optimization engine. Generate {num_mutation} new directive combinations that are completely different from the optimization ideas of the reference designs. You will be given directive combinations that you can refer to, along with the corresponding bottleneck analysis.
• Optimization Reasoning:
∘ Optimization Principle:
· Avoid pipelining outer loops with trip count > 64
· Validate unroll factors as powers of 2
· Prioritize block partitioning for multi-dimensional arrays, and consider cyclic for one-dimensional arrays.
∘ Combinatorial Innovation:
· Layer directives across adjacent loops (e.g., inner-pipeline + outer-unroll)
· Explore novel factor combinations
∘ Validation Checks:
· Prevent resource conflict
between parallel directives
· Ensure the partition factor does not exceed the array dimension
• Configuration Parameters Description: < Directive configuration ... >
• Input: Elite Parent Configurations: {parent_config}; HLS Design Code: {kernel_code}
• Expected Output: Generate exactly {num_mutation} configurations in this format:
[(({'name': [loop_name], 'pipeline': [value], 'unroll': [value]}, {...}), ({'name': [array_name], 'type': [value], 'dim': [value], 'factor': [value]}, {...})), ... ]

Figure 12: Prompt for divergence-enhanced design refactoring.

C Experiment Details

All experiments were conducted on an Intel Xeon Platinum 8378A server running Ubuntu 20.04.6.
The designs were implemented using Vitis HLS 2022.1, targeting the Xilinx ZCU106 MPSoC platform under a maximum synthesis time constraint of 20 minutes. Explored directive configurations exceeding this threshold typically resulted from excessively aggressive parallelization strategies that surpassed the available hardware resources.

C.1 Evaluation Metrics

The average distance to reference set (ADRS) is a widely adopted metric in High-Level Synthesis (HLS) to evaluate the quality of design space exploration (DSE). This metric quantifies the gap between an explored Pareto front P_E, generated by a DSE method, and a reference Pareto front P_R, which represents the best-known optimal solutions. ADRS is defined as the average normalized distance from each design in P_R to its closest counterpart in P_E. Formally, the metric is computed by averaging the minimum relative degradation across all objectives for every reference design, ensuring a reasonable comparison that accounts for the latency (Lat) and resource utilization (Util) trade-offs.

ADRS(P_E, P_R) = (1 / |P_R|) Σ_{λ(φ_γ) ∈ P_R} min_{λ(φ_ω) ∈ P_E} d    (4)

The distance d(·) in ADRS is designed to measure the relative degradation of solutions in P_E compared to those in P_R. For a reference design λ(φ_γ) ∈ P_R, the distance to its nearest neighbor λ(φ_ω) ∈ P_E is calculated by taking the maximum relative difference in latency and resource utilization. This difference is normalized by the objective values of the reference design to ensure scale invariance, avoiding bias toward objectives with large ranges. Critically, d(·) only penalizes solutions in P_E that underperform relative to P_R. If a design in P_E dominates or matches the reference design in both objectives, the distance is
zero. This evaluation aligns with practical DSE goals, where the primary focus is to approximate or exceed the reference front rather than explore regions beyond it.

d = max{0, (Lat(H, λ(φ_ω)) − Lat(H, λ(φ_γ))) / Lat(H, λ(φ_γ)), (Util(H, λ(φ_ω)) − Util(H, λ(φ_γ))) / Util(H, λ(φ_γ))}    (5)

ADRS is suitable for HLS DSE due to the inherent irregularities in Pareto fronts generated during exploration. In HLS, the interplay between compiler optimizations, resource constraints, and latency-performance trade-offs often leads to non-smooth, discontinuous, or clustered Pareto fronts. Traditional metrics like hypervolume (HV), which measures the volume of the objective space dominated by a Pareto front, struggle to provide reliable comparisons in such scenarios. The sensitivity of HV to the shape and continuity of the front requires convexity and smoothness for meaningful interpretation, rendering it less effective when fronts exhibit abrupt transitions or sparse distributions. In contrast, ADRS circumvents these limitations by focusing on pairwise proximity between P_E and P_R, making it robust to irregularities in the front geometry.

In the context of HLS design evaluation, the latency (Lat) is derived from synthesis timing analysis rather than post-implementation routing delays. This metric captures intrinsic circuit performance determined by logic-level optimization decisions, eliminating variations introduced by layout and wiring during the implementation phase. For resource utilization (Util), we define it as a weighted sum of critical FPGA schedulable resources. The utilization metric is formulated as:

Util(H, λ(φ)) = W_LUT · LUT + W_FF · FF + W_DSP · DSP + W_BRAM · BRAM    (6)

where LUT, FF, DSP, and BRAM represent the normalized usage ratios of lookup tables, flip-flops, digital signal processors, and block RAMs, respectively.
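As a concrete reading of Eqs. (4)-(6), the metric can be sketched in a few lines of Python. The fronts below are toy values and the helper names are our own; only the resource weights are the ones quoted in the text.

```python
# Toy sketch of ADRS (Eqs. 4-5) and the weighted utilization metric (Eq. 6).
# Front values below are illustrative, not taken from a real synthesis run.
W = {"LUT": 0.3, "FF": 0.25, "DSP": 0.3, "BRAM": 0.05}  # weights quoted in the text

def util(res):
    """Weighted sum of normalized resource usage ratios (Eq. 6)."""
    return sum(W[k] * res[k] for k in W)

def distance(ref, expl):
    """Relative degradation of an explored (latency, util) point vs. a reference
    point (Eq. 5); only underperformance is penalized, dominance scores zero."""
    lat_r, util_r = ref
    lat_e, util_e = expl
    return max(0.0, (lat_e - lat_r) / lat_r, (util_e - util_r) / util_r)

def adrs(explored, reference):
    """Average distance from each reference design to its nearest explored one (Eq. 4)."""
    return sum(min(distance(r, e) for e in explored) for r in reference) / len(reference)

ref_front  = [(100.0, 0.20), (150.0, 0.10)]
expl_front = [(100.0, 0.20), (165.0, 0.10)]  # second point is 10% slower
print(adrs(expl_front, ref_front))  # 0.05
```

An explored front that dominates every reference design scores an ADRS of zero, matching the asymmetry described above.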
The weighting coefficients (W_LUT = 0.3, W_FF = 0.25, W_DSP = 0.3, W_BRAM = 0.05) prioritize DSP and LUT resources due to their stronger correlation with computational throughput, while BRAM allocation is discounted because memory blocks are typically optimized separately through dedicated directives. By calibrating weights to specific hardware resources, this formulation captures how HLS compiler decisions inherently balance computational density and memory bandwidth availability during design space exploration.

C.2 Benchmark Descriptions and Design Structures

Table 6: Benchmark description with design structure.

Benchmark | Description | # Loop | # Array
atax      | Dual matrix-vector multiplications y = A^T(Ax)               | 4 | 4
bicg      | Biconjugate gradient stabilized method q = Ap, s = A^T r     | 3 | 5
gemm      | BLAS general matrix multiply C_out = αAB + βC                | 4 | 3
gesummv   | Summed matrix-vector multiplications y = αAx + βBx           | 2 | 5
mvt       | Dual matrix-vector products x1 = x1 + Ay1, x2 = x2 + A^T y2  | 4 | 5
md-knn    | Molecular dynamics force computation with neighbor lists     | 2 | 7
spmv      | Sparse matrix-vector multiplication                          | 2 | 4
stencil2d | 2D convolution with 3x3 kernel                               | 4 | 3
stencil3d | 3D stencil computation with two coefficients                 | 9 | 3
viterbi   | Dynamic programming for optimal hidden state sequence in HMM | 7 | 6
sha       | SHA-1 cryptographic hash computation rounds                  | 6 | 3
autocorr  | Autocorrelation computation with scaling and lag products    | 6 | 0

C.3 Heuristic-Based DSE Configurations

Table 7: Baseline DSE method hyperparameter settings.

Baseline     | Hyperparameter Settings
NSGA-II      | Initial samples = 12, Total generations = 8, Crossover probability = 0.9, Mutation probability = 0.3
MOEA/D       | Population size = 12, Max evaluations = 100, Neighbor size = 5, Mutation rate = 0.1, Tchebycheff decomposition
ACO          | Iterations = 8, Ants = 12, Evaporation rate = 0.1, Pheromone update Q = 100, Initial pheromone = 1.0
Lattice [21] | Initial Beta sampling (α = 0.1, β = 0.1), Max samples = 100, Lattice radius = 0.5, SphereTree neighbor search
HGBO [31]    | LHS init trials = 10, Total trials = 100, EHVI candidates = 24, Prior weight = 1.0

D Supplementary Results

D.1 Determination of Initial Sample Sizes

Our experimental observations reveal that increased initial sampling quantities generally enhance Pareto front coverage across benchmarks, as indicated by the progressive convergence toward reference fronts through diminishing ADRS values, as shown in Figure 13. While this trend demonstrates that approximation quality improves with sample size, strictly monotonic ADRS reduction does not universally occur. Furthermore, excessive sampling leads to output truncation and quality degradation due to the inherent limitations in the long-context processing capabilities of LLMs. Through evaluation of the trade-off between coverage and computational complexity, we identified an optimal configuration of 12 initial samples that effectively balances approximation fidelity with computational efficiency. To ensure fair comparison across experimental evaluations, we uniformly applied this sample quantity across the various heuristic-based DSE methods. This initialization strategy facilitates effective warm-start exploration while maintaining minimal computational budgets, highlighting both the efficacy of our sampling approach and the overall efficiency of iDSE.

Figure 13: Optimal initial sample size selection for efficient Pareto front coverage (per-benchmark curves of mean ADRS versus the number of syntheses).

D.2 Construction of Pareto Front through Initial Single-Batch Sampling

In this section, we present a detailed analysis of the data shown in Table 2. For our experiments, we configured NSGA-II with 12 initial sampling designs for each benchmark, eventually reaching approximately 108 populations.
The ADRS explored during the DSE process serves as our comparative baseline. Table 8 enumerates the number of explored designs required by ACO, MOEA/D, Lattice, and HGBO-DSE to reach or fall below the target ADRS. When a DSE method failed to meet the target ADRS within the limited search budget, we recorded the result as the maximum population size of the baseline method. The results demonstrate that iDSE significantly outperforms the other DSE methods, achieving efficiency improvements of 25.1× over NSGA-II, 7.6× over ACO, 7.0× over MOEA/D, 11.3× over Lattice, and 1.7× over HGBO-DSE. Notably, we observed that the ADRS threshold established in the baseline was consistently achieved across all benchmarks during just the Warm-Start phase of iDSE. This finding primarily reveals that our LLM-guided Seed Directive Generation method can construct superior Pareto fronts through single-batch sampling. Furthermore, it highlights the great potential of iDSE for enhancing DSE efficiency, surpassing subsequent adaptive optimization phases by providing designers with insightful designs in earlier stages.

Table 8: Number of explored designs required by different DSE methods to reach target ADRS. Benchmarks are drawn from PolyBench [90], MachSuite [92], and CHStone [91].

DSE Methods | atax | bicg | gemm | gesummv | mvt | md-knn | spmv | stencil2d | stencil3d | viterbi | sha | autocorr | Avg | Geo Mean
NSGA-II     | 108  | 108  | 108  | 108     | 108 | 108    | 108  | 108       | 108       | 108     | 108 | 108      | 1      | 1
ACO         | 19   | 2    | 9    | 67      | 108 | 108    | 108  | 9         | 8         | 108     | 61  | 105      | 8.80×  | 3.31×
MOEA/D      | 18   | 14   | 9    | 55      | 13  | 108    | 46   | 20        | 25        | 108     | 39  | 36       | 4.65×  | 3.56×
Lattice     | 13   | 108  | 108  | 108     | 108 | 93     | 2    | 59        | 99        | 23      | 108 | 36       | 6.59×  | 2.22×
HGBO-DSE    | 3    | 2    | 4    | 25      | 5   | 1      | 1    | 11        | 21        | 9       | 81  | 66       | 32.40× | 14.34×
iDSE        | 3    | 3    | 6    | 4       | 5   | 5      | 1    | 6         | 3         | 10      | 6   | 7        | 30.54× | 25.07×

Figure 14 illustrates the initial explored Pareto fronts obtained for different benchmarks under the previously determined number of initial samples. As shown, the Random Sampling (RS), U-shaped Beta Sampling (BS), and Latin Hypercube Sampling (LHS) methods depend solely on probabilistic distributions, producing sparsely distributed and irregularly scattered designs. Consequently, these methods struggle to achieve sufficiently broad coverage or to closely approximate the reference Pareto front. In contrast, our Seed Directive Generation approach leverages domain-specific prior knowledge to guide single-batch sampling, shaping a more comprehensive and more concave Pareto front that incorporates multiple promising design configurations. This comparative analysis confirms that, even during the preliminary stages of exploration, our approach achieves close alignment with the ground-truth Pareto front while incurring minimal HLS compilation overhead. The results underscore the critical role of representative seed designs in accelerating early-phase design space exploration without compromising solution quality. iDSE significantly improves exploration efficiency through an initial-sampling warm start that rapidly approximates the reference Pareto fronts. This indicates that iDSE can converge rapidly, thereby reducing the time overhead required for hardware optimization while avoiding potential degradation issues that might arise from long-context prompts to LLMs.
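The summary columns of Table 8 follow directly from the per-benchmark synthesis counts. As a check, the iDSE row against the NSGA-II baseline (counts taken from the table itself):

```python
# Recompute the iDSE speedup summary in Table 8 from the per-benchmark
# synthesis counts (NSGA-II always exhausts its budget of 108 designs).
import math

nsga2 = [108] * 12
idse  = [3, 3, 6, 4, 5, 5, 1, 6, 3, 10, 6, 7]   # atax ... autocorr

ratios   = [n / i for n, i in zip(nsga2, idse)]
avg      = sum(ratios) / len(ratios)
geo_mean = math.exp(sum(map(math.log, ratios)) / len(ratios))
print(round(avg, 2), round(geo_mean, 2))  # 30.54 25.07
```

Both values match the table's iDSE row (30.54× average, 25.07× geometric mean).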
Figure 14: Comparison of Pareto fronts constructed by different initial sampling designs (RS, BS, LHS, and Warm-Start, shown against the Pareto-optimal designs and the reference Pareto front).

D.3 Impact of Initial Sampling and Subsequent Search on Heuristic-Based DSE

To further validate our hypothesis that retaining advantageous genetic factors (partial parallelism for arrays and loops) can effectively leverage evolutionary algorithms for DSE in hardware optimization tasks, we decomposed the data in Table 3 into two distinct phases: an initial sampling phase (Table 9) and a subsequent search phase (Table 10). Our analysis reveals that the quality of sampling designs improves proportionally with the enhanced rationality of the sampling methodology. Notably, NSGA-II demonstrates a dependency between the effectiveness of its subsequent search and the quality of initial sampling, suggesting that this heuristic successfully preserves superior genetic factors during optimization to drive efficient DSE.
In contrast, in ACO the ADRS values obtained during the search phase exhibit no clear correlation with the initial sampling quality. The improvements in the quality and dispersion of the initial sampling designs provided by BS and LHS compared to RS are not reflected in the subsequent search phases of ACO, as evidenced by the lack of significant differences among the indicators in the last three columns of Table 10. Unlike evolutionary algorithms that directly inherit and recombine population genes through crossover and mutation operations, ACO relies on the accumulation and evaporation of pheromone trails. While initial sampling designs may provide some prior information, the algorithm continuously weakens the influence of historical paths through pheromone evaporation mechanisms, while dynamically covering trajectories in low-quality regions through path reconstruction. This mechanism causes ACO to rely more on real-time feedback during the iteration process than on the static distribution of the initial population. This empirical observation reinforces the rationale behind our methodology, which combines

Table 9: ADRS comparison of different initial sampling methods.

Benchmark | RS     | BS      | LHS     | Warm-Start
atax      | 2.3900 | 1.1054  | 0.5737  | 0.3116
bicg      | 1.0109 | 0.3685  | 1.0765  | 0.2428
gemm      | 1.1061 | 0.5100  | 1.0581  | 0.4581
gesummv   | 0.6498 | 0.6508  | 0.9751  | 0.5575
mvt       | 1.8678 | 0.8544  | 0.6403  | 0.4193
md-knn    | 0.0217 | 0.0208  | 0.0064  | 0.0064
spmv      | 0.4744 | 1.0585  | 0.8129  | 0.0592
stencil2d | 1.7985 | 0.4616  | 1.2631  | 0.3622
stencil3d | 1.7009 | 0.6244  | 1.5550  | 0.1780
viterbi   | 0.1109 | 0.1069  | 0.0300  | 0.0158
sha       | 0.3658 | 0.1626  | 0.7494  | 0.1521
autocorr  | 0.0883 | 0.0702  | 0.0931  | 0.7458
Avg       | 1      | 1.9097× | 1.7808× | 4.6109×
Geo Mean  | 1      | 1.6508× | 1.3656× | 3.1709×

Table 10: ADRS comparison of search efficiency.
Benchmark | NSGA-II (RS / BS / LHS / Warm-Start)  | MOEA/D (RS / BS / LHS / Warm-Start)   | ACO (RS / BS / LHS)
atax      | 2.3900 / 1.1054 / 0.5767 / 0.3513     | 0.9146 / 0.5794 / 0.5718 / 0.2536     | 2.5109 / 1.8452 / 2.5571
bicg      | 1.0109 / 0.4150 / 1.0771 / 0.2463     | 0.5229 / 0.3147 / 0.4738 / 0.3212     | 0.2711 / 0.4205 / 0.3480
gemm      | 1.1065 / 0.5250 / 1.0584 / 0.4581     | 0.5240 / 0.5658 / 0.6137 / 0.9358     | 0.6785 / 0.5699 / 0.7181
gesummv   | 0.6498 / 0.6595 / 0.9752 / 0.5579     | 0.4671 / 0.4182 / 0.3954 / 0.3761     | 0.4124 / 0.4188 / 0.4248
mvt       | 1.8678 / 0.8715 / 0.6403 / 0.4420     | 2.7230 / 3.5088 / 3.8933 / 3.0261     | 2.0321 / 2.0427 / 1.6964
md-knn    | 0.0217 / 0.0216 / 0.0064 / 0.0064     | 0.0246 / 0.0224 / 0.0306 / 0.0526     | 0.0250 / 0.0232 / 0.0215
spmv      | 0.4744 / 1.0585 / 0.8182 / 0.0592     | 0.3582 / 1.1170 / 1.3312 / 0.5785     | 1.0334 / 1.7828 / 1.3483
stencil2d | 1.8879 / 0.5093 / 1.2729 / 0.3764     | 0.3378 / 0.6214 / 0.6710 / 0.4525     | 0.8438 / 1.1927 / 1.5266
stencil3d | 1.7030 / 0.6244 / 1.5572 / 0.1780     | 0.4583 / 0.6682 / 0.3659 / 0.4397     | 0.7114 / 0.6358 / 0.7699
viterbi   | 0.1111 / 0.1206 / 0.2769 / 0.0158     | 0.2327 / 0.2502 / 0.2460 / 0.1270     | 0.1743 / 0.1363 / 0.1428
sha       | 0.3658 / 0.1626 / 0.7494 / 0.7458     | 0.2790 / 0.1302 / 0.2152 / 0.2095     | 0.3946 / 0.2313 / 0.2584
autocorr  | 0.0913 / 0.0702 / 0.0933 / 0.1521     | 0.0726 / 0.0966 / 0.1254 / 0.0673     | 0.0865 / 0.0828 / 0.0737
Avg       | 1 / 1.8492× / 1.5113× / 4.4010×       | 1.9408× / 1.8798× / 1.8042× / 2.4461× | 1.4485× / 1.4221× / 1.3544×
Geo Mean  | 1 / 1.6026× / 1.1404× / 3.1335×       | 1.5510× / 1.4415× / 1.3004× / 1.6552× | 1.2223× / 1.2368× / 1.2046×

LLM-enhanced initial sampling with subsequent genetic-inspired search algorithms to effectively explore high-quality designs.

Lattice achieves favorable ADRS in smaller-scale designs by utilizing U-shaped Beta sampling [21]. This sampling approach is based on the prior optimization experience that designs with either very high or very low parallelism often occupy opposite ends of the Pareto front (performance-optimal with higher hardware consumption versus resource-efficient with performance trade-offs). U-shaped Beta sampling (BS) generates the initial configuration set φ_0 and derives the approximation of the Pareto front.
For a sampling size N_0, the space φ_0 comprises n unique feature vectors φ, where each element of φ is sampled probabilistically from a symmetric Beta distribution. The probability density function is defined over 0 ≤ x ≤ 1 as:

φ_0(x) = x^(α−1) (1−x)^(α−1) / B(α),    (7)

B(α) = ∫_0^1 x^(α−1) (1−x)^(α−1) dx,    (8)

where B(α) denotes the Beta function. This U-shaped distribution (with α < 1) assigns higher probability density to boundary values of x. This sampling strategy ensures that the initial feature vectors φ_0 contain extreme parameter values with high likelihood, intentionally reserving intermediate feature combinations for
exploration during subsequent refinement stages. The uniform random sampling (RS) strategy generates independent feature vectors φ_0 by selecting each parameter φ_j (for j = 1, ..., d dimensions) from a uniform distribution. Latin Hypercube Sampling (LHS) enforces stratified spatial coverage in d-dimensional space through orthogonal permutation:

φ⁰_ij = (π_j(i) − u_ij) / n,  u_ij ∼ U(0, 1),    (9)

where π_j(·) denotes a random permutation function for the j-th dimension, and u_ij ∼ U(0, 1) denotes uniformly distributed stochastic offsets within hypercube cells. The Seed Directive Generation approach (Warm-Start) integrates a generative model π_θ(·), a prompt P encoding task constraints, and prior knowledge of hardware optimization K. Its sampling mechanism is governed by:

φ⁰_i ∼ π_θ(· | P, K)  (i.i.d.),    (10)

where φ_i denotes a d-dimensional feature vector. LLMs achieve superior efficiency by integrating domain knowledge and constrained optimization principles, outperforming the other sampling baselines.

D.4 Analysis of LLM Performance in DSE

In this section, we provide additional experimental results to elucidate LLM performance during the Preprocessing, Warm-Start, and Adaptive Optimization phases of DSE tasks. We also conduct extensive comparisons with current state-of-the-art general-purpose LLMs.
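The statistical sampling baselines defined in Eqs. (7)-(9) can be sketched briefly; function names are illustrative, and α = 0.1 follows the Lattice baseline setting listed in Table 7:

```python
# Sketches of the BS and LHS sampling baselines from Eqs. (7)-(9).
import random

def beta_sampling(n, d, alpha=0.1, seed=0):
    """U-shaped BS: each feature drawn i.i.d. from symmetric Beta(alpha, alpha)."""
    rng = random.Random(seed)
    return [[rng.betavariate(alpha, alpha) for _ in range(d)] for _ in range(n)]

def latin_hypercube(n, d, seed=0):
    """LHS per Eq. (9): phi_ij = (pi_j(i) - u_ij) / n."""
    rng = random.Random(seed)
    cols = []
    for _ in range(d):
        perm = list(range(1, n + 1))   # pi_j: random permutation of 1..n
        rng.shuffle(perm)
        cols.append([(k - rng.random()) / n for k in perm])
    return [list(row) for row in zip(*cols)]   # rows are feature vectors phi_i

xs = [row[0] for row in beta_sampling(1000, d=1)]
extreme = sum(1 for x in xs if x < 0.1 or x > 0.9) / len(xs)  # mass at the boundaries

pts = latin_hypercube(8, d=3)
# every 1/n-wide stratum of every dimension holds exactly one sample
strata = [sorted(int(p[j] * 8) for p in pts) for j in range(3)]
```

With α < 1 most Beta draws fall near 0 or 1, i.e. minimal- or maximal-parallelism designs, whereas LHS covers every stratum of every dimension exactly once.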
Figure 15: Comparison of invalid design counts with and without Feature-Driven Pruning (per-benchmark counts for gemm, spmv, stencil3d, viterbi, and in total).

We provide supplementary data for Figure 6 in our ablation study. To validate the effectiveness of our Feature-Driven Pruning method, we compared the design space dimension in the Adaptive Optimization phase with and without pruning. We recorded the number of invalid designs that failed due to synthesis timeout or failure caused by scheduling excessive parallelism directives, as shown in Figure 15. The experimental results demonstrate that for benchmarks with larger design spaces and nested loops with high trip counts (gemm, spmv, stencil3d, viterbi), the w/o Pruning approach encountered numerous invalid designs. This occurs because existing LLMs struggle to capture the semantics of complex HLS DSE tasks and lack specialized hardware optimization knowledge.
Consequently, they fail to establish awareness of QoR feedback during exploration, becoming lost in the vast design space. In contrast, constraining exploration to the pruned design space allows LLMs to explore directive parallelism within a restricted feasible region. This approach prevents wasted computational resources resulting from aggressive parallelism allocation or redundant directive configurations that lead to failed QoR evaluations of candidates. Moreover, our experimental results indicate that this mild pruning strategy not only significantly reduces invalid designs but also maintains excellent DSE performance, as evidenced by further improvements in Figure 6.
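Such feasibility filtering follows the scheduling constraints embedded in the Figure 12 prompt (trip-count threshold, power-of-two unroll factors, partition factor bounded by the array dimension). A minimal sketch; field names and the concrete threshold are illustrative assumptions, not iDSE's actual implementation:

```python
# Illustrative feasibility filter mirroring the prompt's validation checks.
def is_power_of_two(x):
    return x >= 1 and (x & (x - 1)) == 0

def valid_loop(cfg, trip_count):
    # avoid pipelining loops whose trip count exceeds the threshold
    if cfg.get("pipeline") and trip_count > 64:
        return False
    return is_power_of_two(cfg.get("unroll", 1))

def valid_array(cfg, dims):
    # partition factor must not exceed the size of the partitioned dimension
    return cfg.get("factor", 1) <= dims[cfg.get("dim", 0)]

loop_cfg  = {"name": "outer", "pipeline": True, "unroll": 4}
array_cfg = {"name": "A", "type": "cyclic", "dim": 0, "factor": 8}
feasible = valid_loop(loop_cfg, trip_count=32) and valid_array(array_cfg, dims=[64])
```

Candidates failing any check are discarded before synthesis, which is what keeps the LLM's exploration inside the feasible region described above.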
/uni00000006/uni00000003/uni00000052/uni00000049/uni00000003/uni00000036/uni0000005c/uni00000051/uni00000057/uni0000004b/uni00000048/uni00000056/uni0000004c/uni00000056/uni00000030/uni00000048/uni00000044/uni00000051/uni00000003/uni00000024/uni00000027/uni00000035/uni00000036 /uni00000006/uni00000003/uni00000052/uni00000049/uni00000003/uni00000036/uni0000005c/uni00000051/uni00000057/uni0000004b/uni00000048/uni00000056/uni0000004c/uni00000056/uni00000027/uni00000048/uni00000048/uni00000053/uni00000036/uni00000048/uni00000048/uni0000004e/uni00000010/uni00000035/uni00000014 /uni00000026/uni0000004f/uni00000044/uni00000058/uni00000047/uni00000048/uni00000003/uni00000016/uni00000011/uni0000001a/uni00000010/uni00000036/uni00000052/uni00000051/uni00000051/uni00000048/uni00000057 /uni0000002a/uni00000048/uni00000050/uni0000004c/uni00000051/uni0000004c/uni00000003/uni00000015/uni00000011/uni00000018/uni00000003/uni00000033/uni00000055/uni00000052 /uni0000002a/uni00000033/uni00000037/uni00000010/uni00000017/uni00000011/uni00000014 /uni0000002a/uni00000055/uni00000052/uni0000004e/uni00000010/uni00000016 /uni00000052/uni00000014/uni00000010/uni00000053/uni00000055/uni00000048/uni00000059/uni0000004c/uni00000048/uni0000005a Figure 16: Comparison of ADRS among MOEA/D-based DSE method under different LLMs. 28 We validated our approach across multiple inference LLMs, comparing various general-purpose LLMs including DeepSeek-R1 ,Claude 3.7-Sonnet ,Gemini 2.5 Pro ,GPT-4.1 ,Grok-3 , and o1-preview for DSE. As training scales expand and methodologies evolve, LLMs show increasing promise for driving effective DSE. Importantly, we believe that the vast HLS design space prevents LLMs from memorizing optimal directive configurations during pre-training. Instead, LLMs allocate
https://arxiv.org/abs/2505.22086v1
these configurations based on learned hardware optimization principles, preserving the flexibility and generalization of iDSE.

Figure 17: Comparison of ADRS among NSGA-II-based DSE method under different LLMs.

We compared different general-purpose LLMs for initial sampling, followed by EA-based DSE methods implemented with MOEA/D (Figure 16) and NSGA-II (Figure 17) for searching new directive configurations. Our experiments revealed significant variations in both convergence speed and final optimization quality across different LLMs. Notably, we found that no single model consistently outperformed others across all benchmarks, indicating that LLMs exhibit different optimization reasoning patterns depending on the HLS design structure and design space dimension.

Figure 18 compares the t-SNE visualization of feature vectors from Pareto-optimal directive configurations sampled through exhaustive exploration (approximately 10,000 explored designs per benchmark) against those identified by our method during both initial sampling and search phases.
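The figures above report ADRS, which this excerpt never defines. As a reference point, here is a minimal sketch of ADRS as it is commonly defined in the HLS DSE literature: the mean, over reference Pareto points, of the distance to the closest explored point. The worst-case relative objective gap used as the distance function, and the example fronts, are illustrative assumptions, not necessarily the paper's exact formulation.

```python
def adrs(reference, approx):
    """Average Distance from Reference Set: for each reference Pareto
    point, take the smallest distance to any point found by the DSE
    method, then average over the reference set. Lower is better;
    0 means every reference point was matched or dominated."""
    def delta(ref, cand):
        # worst-case relative degradation across objectives
        # (e.g. latency, resource utilization); 0 if cand dominates ref
        return max(max(0.0, (c - r) / r) for r, c in zip(ref, cand))
    return sum(min(delta(r, c) for c in approx) for r in reference) / len(reference)

# Hypothetical 2-objective fronts: (latency, area)
reference_front = [(10.0, 5.0), (20.0, 2.0)]
found_front = [(10.0, 5.0), (24.0, 2.0)]
print(adrs(reference_front, found_front))  # 0.1: second point is 20% off in latency
```

An ADRS of 0.1 thus means the found front is, on average, within 10% of the reference front in the worst objective.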
t-SNE is a dimensionality reduction technique that represents high-dimensional data as points in a two-dimensional space, where proximity between points indicates similarity in the data, while greater distances suggest dissimilarity. The visualization reveals that the design space formed by HLS optimization directives, along with their parameters and combinations, is exponentially large, making feature differentiation challenging and resulting in irregular visualization patterns. Furthermore, most HLS designs exhibit clustering characteristics among Pareto-optimal directive configurations. Our Warm-Start strategy effectively occupies these optimal design regions, thereby guiding subsequent Adaptive Optimization phases toward broader coverage. Comparative analysis with Figure 4 demonstrates that while our method may not comprehensively cover the extensive Pareto-optimal designs identified through exhaustive exploration under limited search budgets, it successfully constructs impressive Pareto fronts in the objective space (latency vs. resource utilization). This approach effectively satisfies design requirements while significantly reducing exploration overhead.

Figure 18: t-SNE visualization of reference Pareto-optimal and explored optimization directives in Warm-Start and Adaptive Optimization stages.

D.5 Information of Assets

We provide all results of our experiments in our double-blind review repository, and we plan to upload our complete code for public release after publication. We present the information of assets below:
1. Code
• HGBO-DSE [31]
– License: MIT license.
– URL: https://github.com/hzkuang/HGBO-DSE
• Lattice [21]
– License: Available online.
– URL: http://www.inf.usi.ch/phd/ferretti/lattice-traversing-DSE.html
2. Dataset
• PolyBench [90]
– License: Ohio State University Software Distribution License.
– URL: http://polybench.sf.net
• CHStone [91]
– License: Available online.
– URL: https://github.com/ferrandi/CHStone
• MachSuite [92]
– License: MachSuite BSD-3 license.
– URL: https://github.com/breagen/MachSuite

E Limitations and Future Work

One limitation of our work is that LLMs cannot identify all ground-truth Pareto-optimal designs under finite search budgets, while exhaustive exploration remains impractical due to the vast design space. However, our experimental results demonstrate
that iDSE significantly outperforms heuristic-based DSE methods, discovering Pareto fronts that, although potentially less diverse, are sufficiently impressive to satisfy multifaceted optimization preferences. Relaxing the maximum iteration limit would allow convergence toward a wider spectrum of Pareto-optimal designs at increased time cost. However, the effectiveness of LLMs may diminish with expanding exploration scales due to their limitations in handling long contexts.

Our observations indicate that representative initial sampling designs generated by the LLM effectively enhance subsequent traditional evolutionary algorithms. Therefore, our method could be further improved by focusing on providing broader initial sampling designs combined with traditional DSE methods, thereby leveraging their established advantages in long-term hardware optimization. Additionally, despite subsequent refinement, a modest number of low-quality directive configurations in the initial sampling still result in wasted evaluation time of the vendor HLS tool. We also observed that different general-purpose LLMs exhibit varying performance across benchmarks, with no single model consistently outperforming others, potentially complicating model selection for designers. A promising solution involves developing specialized models through targeted training, thus enhancing the scalability of the method, which we consider an improvement direction for future work in iDSE. Furthermore, our iDSE framework can also serve as a data augmentation method for collecting high-quality datasets in DSE to drive future model training.

Despite these limitations, we believe that iDSE represents pioneering work that provides valuable insights into the future of processor-level design space exploration, motivates efforts to apply generative AI to hardware optimization, and pushes the boundary of electronic design automation.
arXiv:2505.22087v1 [cs.AI] 28 May 2025

Cognitively-Inspired Emergent Communication via Knowledge Graphs for Assisting the Visually Impaired

Ruxiao Chen1, Dezheng Han2, Wenjie Han2, Shuaishuai Guo2
1Johns Hopkins University, 2Shandong University
rchen117@jh.edu, shuaishuai_guo@sdu.edu.cn

Abstract

Assistive systems for visually impaired individuals must deliver rapid, interpretable, and adaptive feedback to facilitate real-time navigation. Current approaches face a trade-off between latency and semantic richness: natural language-based systems provide detailed guidance but are too slow for dynamic scenarios, while emergent communication frameworks offer low-latency symbolic languages but lack semantic depth, limiting their utility in tactile modalities like vibration. To address these limitations, we introduce a novel framework, Cognitively-Inspired Emergent Communication via Knowledge Graphs (VAG-EC), which emulates human visual perception and cognitive mapping. Our method constructs knowledge graphs to represent objects and their relationships, incorporating attention mechanisms to prioritize task-relevant entities, thereby mirroring human selective attention. This structured approach enables the emergence of compact, interpretable, and context-sensitive symbolic languages. Extensive experiments across varying vocabulary sizes and message lengths demonstrate that VAG-EC outperforms traditional emergent communication methods in Topographic Similarity (TopSim) and Context Independence (CI). These findings underscore the potential of cognitively grounded emergent communication as a fast, adaptive, and human-aligned solution for real-time assistive technologies. Code is available at https://github.com/RuxiaoChen/VAG-EC/tree/main.
1 Introduction

According to the World Health Organization, as of 2021, approximately 220 million people worldwide are blind or visually impaired, accounting for about 3% of the global population (World Health Organization, 2023). This significantly affects their ability to perform basic daily activities such as navigation, eating, and personal care, often resulting in increased dependence on caregivers and a loss of privacy and autonomy. In response, a wide range of artificial intelligence (AI)-powered wearable assistive systems have been developed to enhance the independence of visually impaired individuals. These systems generally fall into two categories: natural language-based communication and discrete learned symbolic signaling frameworks.

Natural language-based designs leverage advances in computer vision, natural language processing (NLP), and reinforcement learning to enable AI agents to provide spoken guidance grounded in visual context. This approach has been widely explored within the domain of vision-language navigation (VLN), where agents generate natural-language instructions to support user navigation through complex environments (Anderson et al., 2018; Kurita and Cho, 2020; Weiss et al., 2019). While natural language offers high expressiveness and a low barrier to user understanding, it suffers from high latency, particularly in fast-changing scenarios. For example, in a situation where a bicycle rapidly approaches a visually impaired user, verbal communication may take several seconds, rendering it ineffective for timely hazard avoidance. This limitation has motivated exploration into alternative paradigms capable of enabling faster and more compact communication.

One promising alternative is Emergent Communication (EC), wherein artificial agents develop discrete symbolic protocols through interactive learning.
Compared to natural language, EC messages are compact, low-latency, and well-suited for real-time assistive interactions. However, most existing EC approaches operate directly on unstructured visual
inputs or high-dimensional embeddings, failing to leverage structured, cognitively informed representations (Mu and Goodman, 2021). This limits their semantic alignment with how humans process visual scenes—by segmenting them into entities and reasoning over inter-object relationships (Biederman, 1987; Teney et al., 2017). Moreover, traditional EC frameworks often treat all input elements uniformly, lacking attention mechanisms to prioritize task-relevant information. In contrast, human perception relies heavily on selective attention—the dynamic allocation of focus to salient objects based on contextual relevance. Without such mechanisms, EC agents may produce ambiguous or overly verbose messages, which is especially problematic for constrained output channels such as vibration or haptic feedback (Zhou et al., 2024; Conklin and Smith, 2023).

Human spatial understanding is further grounded in the construction of cognitive maps—internal representations that organize the environment into meaningful entities and their relationships, enabling efficient reasoning and navigation (Epstein et al., 2017; Ishikawa, 2021). For individuals without access to visual input, constructing such maps depends on external systems that can extract and prioritize key environmental features to guide decision-making.

To address these challenges, we propose the VAG-EC (Visual Attention Graph-based Emergent Communication) framework, a cognitively inspired communication paradigm that combines structured semantic abstraction with attention-driven information filtering. VAG-EC first transforms a visual scene into a knowledge graph that encodes object-level semantics along with spatial and functional relationships. This graph-based abstraction introduces human-aligned compositionality, enabling agents to reason over high-level semantic structures.
To emulate human selective attention, VAG-EC integrates an attention mechanism that scores nodes within the knowledge graph based on task relevance. This mechanism enables the agent to focus on salient entities and encode their significance into symbolic messages via EC protocols.

By fusing cognitive structure with attention-guided selection, VAG-EC facilitates the emergence of interpretable, efficient, and context-sensitive communication protocols. These protocols are trained using a referential game with graph-structured inputs and evaluated using standard EC metrics such as Topographic Similarity (TopSim) and Context Independence (CI), across varying vocabulary sizes and message lengths.

Our contributions can be summarized as follows:
•We introduce a compact and interpretable alternative to natural language that supports real-time interaction in assistive scenarios.
•We propose a structured encoding of visual scenes using knowledge graphs to capture human-aligned semantics and object relations.
•We incorporate task-driven attention mechanisms to prioritize salient graph nodes, ensuring that generated messages are concise, relevant, and semantically meaningful.

2 Related Work

2.1 Emergent Communication with Structured Semantics

EC investigates how artificial agents can autonomously develop discrete symbolic languages through interactive learning, often within referential or signaling game settings (Lewis, 1986). In these paradigms, a speaker observes a target stimulus and encodes it into a symbolic message, which a listener uses to infer the intended referent from among distractors. Communication protocols emerge through reinforcement or supervised learning, optimizing task performance via message exchange. While early EC studies operated on abstract symbolic inputs such as one-hot identifiers or handcrafted attribute vectors (Rita et al., 2022a), subsequent
work extended EC to richer modalities, incorporating raw images (Dessì et al., 2021), pixel-based features (Nikolaus, 2024), and multimodal embeddings (Lee, 2024).

Recent efforts have sought to improve the compositionality, interpretability, and generalization of EC protocols beyond mere architectural refinements. For example, Chaabouni et al. (2022) demonstrated that constraining the symbol space via vocabulary bottlenecks fosters the emergence of more compositional and transferable languages in multi-agent populations. Xu et al. (2022) employed disentangled latent representations using β-VAE encoders to promote factorized semantic abstractions, enhancing zero-shot communication capabilities. In another direction, Nikolaus (2024) introduced mechanisms for repair and recovery in communication, whereby agents iteratively refine degraded messages through conversational feedback, leading to greater resilience under noise.

Despite these advances, most existing EC models still operate on dense, unstructured feature vectors derived from convolutional neural networks, lacking any explicit encoding of entity-level semantics or inter-object relationships. Such representations fall short in capturing the relational and compositional nature of real-world scenes, leading to messages that often reflect superficial correlations rather than meaningful semantic distinctions.

Figure 1: Pipeline for constructing a knowledge graph from a dining image. The input image is segmented using Segment Anything, followed by object extraction and feature encoding. Node attributes are derived from object embeddings, while edge attributes are computed based on spatial proximity, forming a structured graph representation of the scene.
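TopSim, the compositionality metric used throughout this line of work, is conventionally the Spearman correlation between pairwise distances in meaning space and pairwise distances in message space. The following dependency-free sketch assumes Hamming distance over fixed-length meanings and messages; the exact distance choices used by any particular paper are an assumption here:

```python
from itertools import combinations

def hamming(a, b):
    # positions where two equal-length sequences differ
    return sum(x != y for x, y in zip(a, b))

def spearman(xs, ys):
    """Spearman rank correlation with average ranks for ties."""
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0.0] * len(vs)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and vs[order[j + 1]] == vs[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average rank for the tied block
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def topsim(meanings, messages):
    """Correlation between meaning-space and message-space distances."""
    pairs = list(combinations(range(len(meanings)), 2))
    d_meaning = [hamming(meanings[i], meanings[j]) for i, j in pairs]
    d_message = [hamming(messages[i], messages[j]) for i, j in pairs]
    return spearman(d_meaning, d_message)

meanings = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(topsim(meanings, ["aa", "ab", "ba", "bb"]))  # ~1.0: perfectly compositional
```

A language whose messages mirror the structure of meanings scores near 1; a scrambled mapping scores much lower.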
Moreover, the absence of attention mechanisms results in uniform treatment of all input elements, limiting the ability to prioritize task-relevant content—a key facet of human communication, especially under real-time constraints.

2.2 Graph-Based Visual Abstraction and Cognitive Grounding

To bridge the gap mentioned above, the vision community has increasingly adopted graph-based representations to model structured relationships within visual inputs (Han et al., 2022; Munir et al., 2024; Liu et al., 2020; Munir et al., 2023). These models treat image patches or objects as graph nodes and represent spatial, functional, or contextual relations as edges, enabling the application of graph neural networks (GNNs) to process scene-level semantics. Such approaches inherently support non-local reasoning, compositional abstraction, and relational inference—properties that are critical for symbolic communication and decision-making.

For instance, ViG (Han et al., 2022) models global interactions through fully connected image graphs, allowing node-level features to aggregate spatial dependencies via GNN layers. GreedyViG (Munir et al., 2024) builds on this by proposing an efficient graph construction mechanism using greedy axial connectivity, substantially reducing computational overhead while retaining critical structural information. These methods have demonstrated strong performance across standard benchmarks and point to the scalability and flexibility of graph-based encodings for high-resolution vision tasks.

From a cognitive standpoint, such structured visual abstractions resonate with the concept of cognitive maps—internalized mental representations that humans use to organize spatial environments and reason about object relations (Guelton, 2023). While most existing works have employed graph-based models primarily for visual recognition or segmentation (Webb et al., 2023; van Bergen et al., 2025), their alignment with
cognitive structures makes them particularly suitable for supporting symbolic reasoning and language emergence (Conklin and Smith, 2023).

In this work, we extend this line of inquiry by integrating knowledge graphs as structured inputs for EC, enabling agents to reason over entities and their relationships in a cognitively aligned manner. Furthermore, we incorporate attention-driven filtering mechanisms to simulate human selective focus, allowing the emergent messages to be both concise and semantically relevant. This fusion of cognitive grounding and attention prioritization addresses the limitations of previous EC systems, paving the way for robust, real-time symbolic communication in assistive settings.

3 Construction of Visual Cognitive Maps

Human spatial cognition fundamentally relies on the formation of cognitive maps—internalized mental models that encapsulate the layout and relationships among salient entities in the environment (Ishikawa, 2021; Behrens et al., 2018). These maps are incrementally constructed and refined through multimodal sensory input and interaction, allowing individuals to navigate, plan actions, and make inferences about spatial structure. For instance, when seated at a dining table, a person rapidly constructs a mental representation of the location of plates, cutlery, and food items, updating it dynamically through visual or haptic feedback. For individuals with visual impairments, however, constructing such maps is substantially more difficult, given the limited and often sequential nature of tactile information acquisition.

In the context of assistive AI systems, the graph-theoretic abstraction of such cognitive maps provides a principled and computationally tractable representation of structured visual environments. Inspired by human cognition, we model visual scenes as knowledge graphs, where nodes correspond to object instances and edges encode spatial or functional relationships.
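The graph construction just described (object instances as nodes, proximity-based edges) can be sketched directly. Segmentation and feature encoding are elided, object centroids are assumed to be given, and the nearest-neighbor rule over Euclidean centroid distances mirrors the construction used in this section:

```python
import math

def build_knowledge_graph(centroids, n_neighbors=2):
    """Nodes are object instances; each node is connected to its
    n nearest neighbors by Euclidean distance between centroids.
    Edges are stored undirected as sorted (i, j) pairs."""
    nodes = list(range(len(centroids)))
    edges = set()
    for i in nodes:
        nearest = sorted(
            (j for j in nodes if j != i),
            key=lambda j: math.dist(centroids[i], centroids[j]),
        )[:n_neighbors]
        for j in nearest:
            edges.add((min(i, j), max(i, j)))
    return nodes, sorted(edges)

# Hypothetical centroids of four segmented objects
centroids = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (10.0, 10.0)]
nodes, edges = build_knowledge_graph(centroids, n_neighbors=1)
print(edges)  # [(0, 1), (0, 2), (1, 3)]
```

Note that even an isolated object (here, the one at (10, 10)) still gains an edge to its nearest neighbor, so the graph never has fully disconnected nodes.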
This structured representation serves as the basis for downstream reasoning and communication tasks within our emergent communication framework.

To extract object-level representations from raw visual input, we employ the Segment Anything Model (SAM) (Kirillov et al., 2023), a general-purpose, zero-shot segmentation model capable of identifying object boundaries based on low-level visual cues such as texture and edges. Despite its strong generalization ability, SAM lacks semantic grounding and often yields redundant or spurious segments. To mitigate this, we introduce a filtering pipeline based on a combination of heuristics—including object size, spatial prominence, and contextual relevance—to retain only meaningful and distinct instances.

From the filtered set, we select the top-N objects to serve as graph nodes. Their visual features are encoded using a pretrained Convolutional Neural Network (CNN), which yields a fixed-dimensional embedding for each node. Pairwise spatial relationships are encoded by constructing edges between each node and its n nearest neighbors, based on the Euclidean distance between object centroids—a choice aligned with perceptual proximity priors in human cognition.

Formally, the knowledge graph for image I_i is denoted as c_i = (V_i, E_i), where V_i is the set of selected object nodes and E_i the set of proximity-based edges. This graph is processed using a Graph Convolutional Network (GCN) (Pope et al., 2019), which iteratively aggregates and transforms node features by propagating information across local neighborhoods. The update rule at layer l+1 is defined
https://arxiv.org/abs/2505.22087v1
as:

H^{l+1} = \sigma(\hat{D}^{-1/2} \hat{A} \hat{D}^{-1/2} H^{l} W^{l}),   (1)

where \hat{A} = A + I represents the adjacency matrix with added self-loops, \hat{D} is its degree matrix, W^{l} is a learnable weight matrix, and \sigma(\cdot) denotes a non-linear activation function.

To emulate the human ability to selectively attend to task-relevant stimuli, we integrate an attention mechanism into the graph reasoning pipeline (Ri et al., 2023). Each node j is assigned a dynamic attention weight w_j that reflects its contextual salience with respect to the task objective. The global representation of the graph, J, is then computed as an attention-weighted aggregation of node embeddings:

J = g(c_i) = \sum_j w_j h_j,   (2)

where h_j denotes the feature vector of node j, and g(\cdot) represents the complete feature extraction process encompassing the GCN and attention modules. This cognitively inspired representation pipeline enables our system to capture human-aligned compositional structures while focusing on the most pertinent aspects of the environment. It thereby forms a robust foundation for downstream symbolic communication tasks that demand both interpretability and efficiency.

4 Visual-Attention Graph-based Emergent Communication (VAG-EC)

4.1 Preliminary

Our framework builds on the classical Lewis signaling game (Lewis, 1986), a two-agent communication protocol in which a speaker observes a world state and sends a discrete message to a listener, who must infer the correct state from a set of distractors. A widely used extension of this game replaces symbolic states with natural images: the speaker is shown a target image, and the listener receives the same image along with distractors. The speaker emits a message, and the listener selects the candidate that best matches it. While this grounds communication in perceptual input, most implementations operate directly on raw pixels or CNN features, without any explicit structural representation.
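To make the graph encoder of Section 3 concrete, the GCN update of Eq. (1) and the attention readout of Eq. (2) can be sketched in NumPy as below. Two details are our own assumptions rather than choices fixed by the text: \sigma is taken to be ReLU, and the attention scores are passed in precomputed (in the model they are produced from node features).

```python
import numpy as np

def gcn_layer(H, A, W):
    """One GCN layer, Eq. (1): H^{l+1} = sigma(D^-1/2 (A+I) D^-1/2 H^l W^l)."""
    A_hat = A + np.eye(A.shape[0])            # adjacency with self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    D_inv_sqrt = np.diag(d_inv_sqrt)          # D^-1/2
    H_next = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W
    return np.maximum(H_next, 0.0)            # sigma = ReLU (our assumption)

def attention_readout(H, scores):
    """Eq. (2): J = sum_j w_j h_j, with softmax-normalized attention weights."""
    w = np.exp(scores - scores.max())
    w = w / w.sum()
    return w @ H

rng = np.random.default_rng(0)
A = np.array([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])  # 3-node toy graph
H = rng.normal(size=(3, 4))                               # node features
W = rng.normal(size=(4, 4))                               # learnable weights
J = attention_readout(gcn_layer(H, A, W), scores=np.array([0.5, 1.0, 0.2]))
print(J.shape)  # (4,)
```

With uniform scores the readout reduces to mean pooling over nodes, which makes the effect of the attention weights easy to sanity-check.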
Operating directly on such raw features, EC agents often develop protocols based on low-level visual cues, such as color or texture, rather than semantically meaningful concepts (Han et al., 2022). This hinders both the interpretability of the emergent language and its ability to generalize across contexts.

Figure 2: Overview of the proposed VAG-EC framework. Visual scenes are segmented and converted into knowledge graphs, which are encoded by the speaker to generate discrete messages. The listener decodes the message to identify the correct scene, with human feedback guiding end-to-end optimization.

From a cognitive perspective, this design diverges from how humans process and reason about visual environments. Extensive research in psychology suggests that people construct cognitive maps: structured, graph-like internal representations that encode objects and their interrelationships (Epstein et al., 2017; Li et al., 2023). These cognitive graphs serve as organizing schemata for navigating physical, social, and conceptual spaces, supporting generalization, abstraction, and flexible inference (Peer et al., 2021). Moreover, they encode both observed and inferred relationships in a map-like space, enabling spatial reasoning beyond what is directly visible (Li et al., 2023). To align
emergent communication with these cognitive principles, we construct a visual cognitive graph for each image. Given an input image I_i, we extract a knowledge graph c_i = (V_i, E_i), where V_i denotes object nodes (obtained via SAM segmentation) and E_i encodes spatial or functional relationships between them. These graphs serve as structured, object-centric representations of the scene, allowing both the speaker and listener to reason over high-level semantics rather than raw visual patterns.

4.2 Game Setup

Specifically, the speaker and listener independently process graphs via their respective graph feature extractors, g_s(\cdot) and g_l(\cdot), implemented as attention-augmented GCNs. Given a graph c_i, g_s(c_i) or g_l(c_i) produces a feature embedding summarizing its semantic content.

During each communication round, the speaker observes the target graph c_0 and encodes it into g_s(c_0). The listener receives a set of candidate graphs {c_0, c_1, ..., c_n} (one target and n distractors) and processes them into embeddings {g_l(c_0), g_l(c_1), ..., g_l(c_n)}.

Communication proceeds as in the classical Lewis signalling framework: the speaker transmits a discrete message m, and the listener uses it to identify the target among the candidates based on the learned embedding space (Ohmer et al., 2022; Guo et al., 2019).

4.3 Speaker and Listener

Both the speaker and listener are implemented as Gated Recurrent Units (GRUs) with separate parameters (Ogunleye et al., 2024; Mu and Goodman, 2021). The speaker is parameterized by \theta and generates a discrete message m = (m_1, ..., m_L), where each token m_\ell \in V and V is the shared vocabulary of size |V|. Given the graph-structured embedding g_s(c_s) of the target scene c_s, the speaker samples a message distribution:

p(m \mid c_s) = f_{enc}(g_s(c_s); \theta).   (3)

The listener, parameterized by \phi, receives the sampled message m and, for each candidate scene c_i, computes a matching score between the decoded message and the listener's graph embedding g_l(c_i).
The probability that c_i is the target is given by:

p(y_i = 1 \mid c_i, m) = \sigma(f_{dec}(m; \phi)^\top g_l(c_i)),   (4)

where \sigma(\cdot) denotes the logistic sigmoid function, and y_i \in \{0, 1\} indicates whether c_i is the true target (y_i = 1) or a distractor (y_i = 0).

Training involves jointly optimizing the speaker and listener to maximize the likelihood of correct listener predictions. Let T denote the set of targets, S the set of sampled messages, and G the set of graph embeddings for candidates. The loss function for a batch is:

L(T, S, G) = -\sum_i \log p(y_i \mid c_i, \hat{m}),   (5)

where \hat{m} \sim p(m \mid c_s) is a message sampled from the speaker distribution conditioned on the target.

4.4 Differentiable Message Sampling

Since the message m = (m_1, ..., m_L) consists of discrete symbols from a vocabulary V, the sampling process in Eq. (3) is non-differentiable. This prevents direct optimization of the speaker via gradient-based methods such as SGD.

To address this, we adopt the Gumbel–Softmax relaxation, which approximates discrete sampling using a continuous, differentiable distribution (Mu and Goodman, 2021; Carmeli et al., 2023). Specifically, let \pi_\ell \in R^{|V|} be the logits (unnormalized probabilities) output by the speaker's GRU for position \ell, and let g_{\ell k} \sim Gumbel(0, 1) be i.i.d. noise variables. The relaxed message \tilde{m}_\ell \in R^{|V|} at position \ell is defined component-wise as:

\tilde{m}_{\ell k} = \frac{\exp((\log \pi_{\ell k} + g_{\ell k}) / \tau)}{\sum_{j=1}^{|V|} \exp((\log \pi_{\ell j} + g_{\ell j}) / \tau)},   (6)
where \tau > 0 is the temperature parameter that controls the smoothness of the distribution. As \tau \to 0, the soft sample \tilde{m}_\ell approaches a one-hot vector; as \tau increases, the distribution becomes smoother. We set \tau = 1 in our experiments to balance stability and approximation quality.

During training, the soft messages \tilde{m}_\ell are used as input to the listener's decoder, allowing gradients to propagate through the message generation process. At evaluation time, we replace \tilde{m}_\ell with its one-hot argmax vector, converting the soft message back into a discrete symbol from V.

5 Construction of Dataset

Due to the absence of publicly available datasets focused on dining scenarios, we construct a synthetic dataset for training and validation, while reserving real-world images exclusively for testing. To generate diverse and semantically rich training data, we adopt a text-to-image diffusion model (Yang et al., 2023), which progressively refines random noise into high-quality images conditioned on textual prompts.

We design a structured prompt library centered around three key categories: food, drink, and tableware. Prompts are created by randomly sampling and combining elements from these categories, resulting in thousands of unique scene compositions. Each generated image is conditioned on a randomly selected prompt, enabling a wide variety of dining scene layouts.

Importantly, diffusion models introduce inherent stochasticity: identical prompts yield visually distinct images across different sampling runs (Nguyen et al., 2023; Reutov, 2023). This property allows us to generate diverse training samples while maintaining semantic consistency, helping reduce dataset bias and improve generalization.

Further details on the prompt construction process, as well as example images from both synthetic and real-world datasets, are provided in Appendix A.
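Concretely, sampling from such a prompt library can be sketched as below; the template follows the one given in Appendix A, while the category vocabularies here are invented placeholders rather than our actual word lists:

```python
import random

# Placeholder vocabularies; the actual category word lists are not reproduced here.
FOODS = ["pasta", "steak", "salad"]
DRINKS = ["orange juice", "water", "red wine"]
TOOLS = ["fork", "knife", "spoon", "chopsticks"]
DIRECTIONS = ["left", "right", "top", "bottom"]

TEMPLATE = ("The center of the picture is a plate with {food} on this plate, "
            "a glass of {drink} on the {direction1} of the plate, "
            "a pair of {tool1} and {tool2} on the {direction2} of the plate, "
            "and a {tool3} next to the {drink}. The picture is photographic.")

def sample_prompt(rng):
    """Sample one dining-scene prompt by filling the Appendix A template."""
    tool1, tool2, tool3 = rng.sample(TOOLS, 3)   # three distinct tableware items
    d1, d2 = rng.sample(DIRECTIONS, 2)           # two distinct placements
    return TEMPLATE.format(food=rng.choice(FOODS), drink=rng.choice(DRINKS),
                           direction1=d1, direction2=d2,
                           tool1=tool1, tool2=tool2, tool3=tool3)

print(sample_prompt(random.Random(0)))
```

Because the `{drink}` slot appears twice in the template, keyword formatting guarantees the same beverage is referenced consistently within one prompt.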
6 Evaluation

We evaluate the effectiveness of our framework by varying the vocabulary size V \in \{10, 20, 80\} while fixing the message length L = 10. This setup allows us to assess how well the model adapts to different communication bottlenecks and develops efficient symbolic protocols under constrained or relaxed settings.

To quantify the semantic quality of the learned messages, we adopt two widely used metrics in emergent communication research: Context Independence (CI) and Topographic Similarity (TopSim) (Boldt and Mortensen, 2024). CI measures how consistently a given token refers to the same concept across different contexts, reflecting symbolic stability and compositionality (Kuciński et al., 2021). TopSim evaluates how well the geometric structure of the message space aligns with that of the input space, capturing the degree to which similar inputs yield similar messages. These two metrics offer complementary perspectives: CI focuses on token-level interpretability, while TopSim captures global structural alignment (Chaabouni et al., 2020; Peters et al., 2025). Further implementation details are provided in Appendix B.

To further characterize the symbolic structure of the emergent communication protocols, we analyze three distributional properties: (1) token frequency distribution, (2) cumulative token coverage, and (3) Zipfian behavior. These metrics provide insight into symbolic efficiency, diversity, and naturalness. Token frequency distribution reveals whether the protocol suffers from token collapse or makes full use of the vocabulary (Chaabouni et al., 2019). Cumulative coverage measures how many tokens are needed to account for 90% of all message tokens, indicating expressive range (Ueda and Washio, 2021). Zipfian behavior, assessed via log–log rank-frequency plots, evaluates whether the emergent language follows the power-law regularity typical of natural languages (Ueda and Washio, 2021; Ueda and Taniguchi, 2024).

Figure 3: Quantitative comparison between the baseline EC and our proposed VAG-EC framework. (a)–(c) show token-level statistics: Zipf distribution, cumulative token coverage, and frequency histogram. (d) reports task-level performance across three metrics (Accuracy, TopSim, and Context Independence) under varying vocabulary sizes.

6.1 Comparison on Accuracy, TopSim and Context Independence

Figure 3(d) reports the comparison between VAG-EC and the EC baseline across all three evaluation metrics. For accuracy, at low vocabulary sizes (V = 10, 20), the baseline must reuse tokens for unrelated concepts, leading to ambiguity and degraded success.
In contrast, VAG-EC builds a structured knowledge graph and focuses on goal-relevant relations (e.g., a next-to relation between cup and forks), allowing the speaker to produce compact yet unambiguous messages. This semantic compression drives the accuracy gains in constrained settings. At V = 80, the baseline improves by memorising near one-to-one image–message mappings, but this gain is superficial: TopSim and CI remain low, indicating poor generalisation. Since real-world assistive systems often operate under tight vocabularies, VAG-EC's structural advantage is practically more valuable.

TopSim evaluates whether similar inputs yield similar messages. Our graph-based encoder, with attention over structured object–relation nodes, creates disentangled scene representations where compositional variations (e.g., object identity vs. color) remain linearly organised. This smooth latent geometry naturally improves TopSim.

CI measures whether tokens maintain consistent meanings across contexts. The baseline, trained on raw features, often develops polysemous symbols, especially with small V. In contrast, VAG-EC emits messages over pre-factorised graph elements, enforcing semantic alignment and reducing token ambiguity. This explains the observed 30–60% CI improvement across all setups.

6.2 Message-Level Behaviour of VAG-EC

Figure 3(a)–(c) presents three views of message-level token usage, comparing VAG-EC with the EC baseline.

In the Zipf plot, natural languages typically exhibit a smooth inverse-rank curve. As we can see in Figure 3(a), the baseline's curve drops steeply, indicating token collapse: a few symbols dominate, while the rest are underused. In contrast, VAG-EC's distribution is flatter and closer to the ideal Zipf trend. By grounding utterances in ⟨object, relation⟩ tuples and attending over graph nodes, our model spreads
usage across mid-rank tokens, producing a more compact yet expressive code.

The cumulative coverage plot quantifies lexical diversity. The baseline achieves 90% coverage with just 4 tokens, whereas VAG-EC requires 6–7, reflecting broader symbol usage. Because each token in VAG-EC maps to a structured semantic element, the model naturally rotates through more of the vocabulary without relying on overloaded or generic symbols.

Finally, the raw frequency histogram confirms this effect at the token level: the baseline concentrates usage in IDs 0–2, while VAG-EC distributes usage more evenly and activates long-tail tokens. This balance is a direct result of grounding; with a fixed semantic schema, token meanings are stable and purpose-specific, reducing redundancy and ambiguity, key properties for downstream deployment in constrained communication channels.

7 Conclusion

The VAG-EC framework represents a significant advancement in assistive communication for visually impaired individuals by integrating knowledge graphs and attention mechanisms to overcome the limitations of existing technologies. Through systematic evaluation, the framework has demonstrated enhanced expressiveness and adaptability, offering a more structured and interpretable communication method. By constructing cognitive maps that mimic human visual perception, the framework facilitates efficient spatial information processing. This innovative approach not only addresses current challenges but also lays the groundwork for future research in emergent semantic communications.

8 Limitations

While our VAG-EC framework introduces a cognitively inspired approach to emergent communication, several limitations remain that point to future directions.

First, our current work focuses exclusively on structured dining scenes, where the object categories and spatial relations are relatively constrained.
Generalizing to a broader range of scenarios, such as bathrooms (e.g., locating a toothbrush near a sink) or indoor navigation (e.g., avoiding moving obstacles), requires handling significantly more diverse semantic content. When multiple domains are combined into a single model, scene representations may become semantically ambiguous, making it harder for agents to extract consistent and relevant relational cues (Rita et al., 2022b; Feng et al., 2024).

Second, in our current referential game setup, both training and evaluation are performed using simulated agents. In practice, the most faithful way to optimize and assess an assistive communication system is through real blind participants in a human-in-the-loop paradigm. Such an approach would allow the emergent protocol to co-evolve with user preferences, tactile interpretation, and cognitive expectations. However, involving human subjects raises ethical, logistical, and methodological complexities beyond the scope of this paper (Holzinger, 2022). Moreover, designing and evaluating such a closed-loop human–agent system is substantial enough to merit a dedicated study. This work provides theoretical and experimental groundwork toward that broader goal.

References

Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton van den Hengel. 2018. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3674–3683.

Timothy E.J. Behrens, Timothy H. Muller, James C.R. Whittington, Shirley Mark, Alon B. Baram, Kimberly L. Stachenfeld, and Zeb Kurth-Nelson. 2018. What is
a cognitive map? Organizing knowledge for flexible behavior. Neuron, 100(2):490–509.

Irving Biederman. 1987. Recognition-by-components: A theory of human image understanding. Psychological Review, 94(2):115–147.

Brendon Boldt and David Mortensen. 2024. A review of the applications of deep learning-based emergent communication. Trans. Mach. Learn. Res.

Boaz Carmeli, Ron Meir, and Yonatan Belinkov. 2023. Emergent quantized communication. Proceedings of the AAAI Conference on Artificial Intelligence, 37(10):11533–11541.

Rahma Chaabouni, Eugene Kharitonov, Diane Bouchacourt, Emmanuel Dupoux, and Marco Baroni. 2020. Compositionality and generalization in emergent languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4427–4442, Online. Association for Computational Linguistics.

Rahma Chaabouni, Eugene Kharitonov, Emmanuel Dupoux, and Marco Baroni. 2019. Anti-efficient encoding in emergent communication. Curran Associates Inc., Red Hook, NY, USA.

Rahma Chaabouni, Florian Strub, Florent Altché, Eugene Tarassov, Corentin Tallec, Elnaz Davoodi, Kory Wallace Mathewson, Olivier Tieleman, Angeliki Lazaridou, and Bilal Piot. 2022. Emergent communication at scale. In International Conference on Learning Representations.

Henry Conklin and Kenny Smith. 2023. Compositionality with variation reliably emerges in neural networks. In The Eleventh International Conference on Learning Representations.

Roberto Dessì, Eugene Kharitonov, and Marco Baroni. 2021. Interpretable agent communication from scratch (with a generic visual processor emerging on the side). In Proceedings of the 35th International Conference on Neural Information Processing Systems, NIPS '21, Red Hook, NY, USA. Curran Associates Inc.

Russell A. Epstein, Eva Zita Patai, Joshua B. Julian, and Hugo J. Spiers. 2017. The cognitive map in humans: spatial navigation and beyond. Nature Neuroscience, 20(11):1504–1513.
Yicheng Feng, Boshi An, and Zongqing Lu. 2024. Learning multi-object positional relationships via emergent communication. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16):17371–17379.

Bernard Guelton. 2023. "Mental maps": Between memorial transcription and symbolic projection. Frontiers in Psychology, Volume 14.

Shangmin Guo, Yi Ren, Serhii Havrylov, Stella Frank, Ivan Titov, and Kenny Smith. 2019. The emergence of compositional languages for numeric concepts through iterated learning in neural agents. CoRR, abs/1910.05291.

Kai Han, Yunhe Wang, Jianyuan Guo, Yehui Tang, and Enhua Wu. 2022. Vision GNN: An image is worth graph of nodes. In Advances in Neural Information Processing Systems.

Andreas Holzinger. 2022. Knowledge-enhanced machine learning and the future of artificial intelligence. Artificial Intelligence Review, 55:4273–4300.

Toru Ishikawa. 2021. Spatial thinking, cognitive mapping, and spatial awareness. Cognitive Processing, 22:89–96.

Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, and Ross Girshick. 2023. Segment anything. arXiv:2304.02643.

Łukasz Kuciński, Tomasz Korbak, Paweł Kołodziej, and Piotr Miłoś. 2021. Catalytic role of noise and necessity of inductive biases in the emergence of compositional communication. In Proceedings of the 35th International Conference on Neural Information Processing Systems, NIPS '21, Red Hook, NY, USA. Curran Associates Inc.

Shuhei Kurita and Kyunghyun Cho. 2020. Generative language-grounded policy in vision-and-language navigation with Bayes' rule. ArXiv, abs/2009.07783.

Angeliki Lazaridou, Karl Moritz
Hermann, Karl Tuyls, and Stephen Clark. 2018. Emergence of linguistic communication from referential games with symbolic and pixel input. In International Conference on Learning Representations.

Heeyoung Lee. 2024. One-to-many communication and compositionality in emergent communication. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 20794–20811, Miami, Florida, USA. Association for Computational Linguistics.

David Lewis. 1986. Convention: A philosophical study.

Jinhui Li, Qunjun Liang, Jiajun Liao, Senning Zheng, Kemeng Chen, and Ruiwang Huang. 2023. Representation of the inferred relationships in a map-like space. Human Brain Mapping, 44(9):3744–3757.

Chunxiao Liu, Zhendong Mao, Tianzhu Zhang, Hongtao Xie, Bin Wang, and Yongdong Zhang. 2020. Graph structured network for image-text matching. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10918–10927.

Jesse Mu and Noah Goodman. 2021. Emergent communication of generalizations. In Advances in Neural Information Processing Systems, volume 34, pages 17994–18007.

Mustafa Munir, William Avery, and Radu Marculescu. 2023. MobileViG: Graph-based sparse attention for mobile vision applications. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 2211–2219.

Mustafa Munir, William Avery, Md Mostafijur Rahman, and Radu Marculescu. 2024. GreedyViG: Dynamic axial graph construction for efficient vision GNNs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6118–6127.

Quang Ho Nguyen, Truong Tuan Vu, Anh Tuan Tran, and Khoi Nguyen. 2023. Dataset diffusion: Diffusion-based synthetic data generation for pixel-level semantic segmentation. In Thirty-seventh Conference on Neural Information Processing Systems.

Mitja Nikolaus. 2024. Emergent communication with conversational repair.
In The Twelfth International Conference on Learning Representations.

Makanjuola Adekunmi Ogunleye, Chase Vickery, and Ismini Lourentzou. 2024. Emergent corpus pretraining benefits vision language modeling.

Xenia Ohmer, Marko Duda, and Elia Bruni. 2022. Emergence of hierarchical reference systems in multi-agent communication. In Proceedings of the 29th International Conference on Computational Linguistics, pages 5689–5706, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.

Michael Peer, Iva K. Brunec, Nora S. Newcombe, and Russell A. Epstein. 2021. Structuring knowledge with cognitive maps and cognitive graphs. Trends in Cognitive Sciences, 25(1):37–54.

Jannik Peters, Constantin Waubert de Puiseau, Hasan Tercan, Arya Gopikrishnan, Gustavo Adolpho Lucas de Carvalho, Christian Bitter, and Tobias Meisen. 2025. Emergent language: a survey and taxonomy. Autonomous Agents and Multi-Agent Systems, 39(1).

Phillip E. Pope, Soheil Kolouri, Mohammad Rostami, Charles E. Martin, and Heiko Hoffmann. 2019. Explainability methods for graph convolutional neural networks. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

Ilya Reutov. 2023. Generating of synthetic datasets using diffusion models for solving computer vision tasks in urban applications. Procedia Computer Science, 229:335–344. 12th International Young Scientists Conference in Computational Science, YSC2023.

Ryokan Ri, Ryo Ueda, and Jason Naradowsky. 2023. Emergent communication with attention. In CogSci.

Mathieu Rita, Corentin Tallec, Paul Michel, Jean-Bastien Grill, Olivier Pietquin, Emmanuel Dupoux, and Florian Strub. 2022a. Emergent communication: Generalization and overfitting in Lewis games. In Advances in Neural Information Processing
Systems, volume 35, pages 1389–1404. Curran Associates, Inc.

Mathieu Rita, Corentin Tallec, Paul Michel, Jean-Bastien Grill, Olivier Pietquin, Emmanuel Dupoux, and Florian Strub. 2022b. Emergent communication: Generalization and overfitting in Lewis games. In Advances in Neural Information Processing Systems.

Damien Teney, Lingqiao Liu, and Anton van den Hengel. 2017. Graph-structured representations for visual question answering.

Ryo Ueda and Tadahiro Taniguchi. 2024. Lewis's signaling game as beta-VAE for natural word lengths and segments.

Ryo Ueda and Koki Washio. 2021. On the relationship between Zipf's law of abbreviation and interfering noise in emergent languages. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop, pages 60–70, Online. Association for Computational Linguistics.

Ruben van Bergen, Justus Hübotter, and Pablo Lanillos. 2025. Object-centric proto-symbolic behavioural reasoning from pixels.

Taylor Webb, Shanka Subhra Mondal, and Jonathan D Cohen. 2023. Systematic visual reasoning through object-centric relational abstraction. In Advances in Neural Information Processing Systems, volume 36, pages 72030–72043. Curran Associates, Inc.

Martin Weiss, Simon Chamorro, Roger Girgis, Margaux Luck, Samira Ebrahimi Kahou, Joseph Paul Cohen, Derek Nowrouzezahrai, Doina Precup, Florian Golemo, and Christopher Joseph Pal. 2019. Navigation agents for the visually impaired: A sidewalk simulator and experiments. In Conference on Robot Learning.

World Health Organization. 2023. Blindness and vision impairment.

Zhenlin Xu, Marc Niethammer, and Colin Raffel. 2022. Compositional generalization in unsupervised compositional representation learning: A study on disentanglement and emergent language. In Advances in Neural Information Processing Systems.
Ling Yang, Zhilong Zhang, Yang Song, Shenda Hong, Runsheng Xu, Yue Zhao, Wentao Zhang, Bin Cui, and Ming-Hsuan Yang. 2023. Diffusion models: A comprehensive survey of methods and applications. ACM Comput. Surv., 56(4).

Enshuai Zhou, Yifan Hao, Rui Zhang, Yuxuan Guo, Zidong Du, Xishan Zhang, Xinkai Song, Chao Wang, Xuehai Zhou, Jiaming Guo, Qi Yi, Shaohui Peng, Di Huang, Ruizhi Chen, Qi Guo, and Yunji Chen. 2024. Emergent communication for numerical concepts generalization. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16):17609–17617.

A Dataset Generation

To simulate diverse and semantically structured dining environments, we construct a synthetic dataset using a compositional prompt template tailored for diffusion-based image generation. Each prompt specifies the spatial arrangement and object types in the scene, using the following template:

"The center of the picture is a plate with {food} on this plate, a glass of {drink} on the {direction1} of the plate, a pair of {tool1} and {tool2} on the {direction2} of the plate, and a {tool3} next to the {drink}. The picture is photographic."

Here, {food}, {drink}, and {tool} are sampled from predefined category-specific vocabularies to ensure semantic consistency, while {direction1} and {direction2} are drawn from left, right, top, and bottom, introducing controlled spatial variability. Figure 4 shows an example of a generated dining image. For evaluation, we rely on real-world dining scenes (see Figure 5) to test generalization beyond the synthetic domain.

Figure 4: Example of a
generated dining scenario from the synthetic dataset.

Figure 5: Example of a real-world dining scenario used for testing.

B Evaluation Metrics Explanation

B.1 Context Independence

Context Independence (CI) measures the extent to which the generated messages remain consistent and generalizable across different environments. This is particularly important for blind individuals, as the language must be interpretable regardless of variations in the surrounding context. Given a message m and a corresponding set of concepts C, we define context independence as:

CI(C, M) = \frac{1}{|C|} \sum_{c \in C} p_m(m_c \mid c) \cdot p_c(c \mid m_c),   (7)

where m_c = \arg\max_m p_c(c \mid m) is the most likely message assigned to a given concept c. This formulation ensures that the generated messages are semantically stable and do not fluctuate based on minor environmental variations. A higher CI score indicates that the emergent language is more reliable and less context-dependent, allowing blind users to interpret messages consistently across different situations.

Our results show that the knowledge-graph-based VAG-EC framework consistently improves CI across various experimental configurations. This suggests that encoding semantic object relationships within a graph structure leads to more stable and interpretable messages, reducing the influence of dataset-specific contextual variations.

B.2 Topographic Similarity

TopSim measures the structural alignment between the object space and the message space, ensuring that semantically similar objects are represented with similar messages (Lazaridou et al., 2018). We compute TopSim by comparing the pairwise distances in both spaces using Spearman's rank correlation coefficient:

TopSim = \rho(D_c, D_m),   (8)

where D_c represents the pairwise distances between concept embeddings and D_m represents the pairwise distances between message embeddings. The Spearman correlation coefficient \rho quantifies the monotonic relationship between these two distance matrices.
A higher TopSim score indicates that the emergent language effectively preserves the structural relationships among objects, allowing blind individuals to infer spatial organization and object interactions more accurately.

Our experimental results demonstrate that the knowledge-graph-based VAG-EC framework achieves higher TopSim scores compared to the baseline emergent communication model. This improvement suggests that leveraging knowledge graphs enables the system to better capture and retain object relationships, leading to more structured and meaningful message representations.
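A minimal sketch of both metrics follows, assuming concept-message co-occurrence counts for CI (Eq. 7) and precomputed pairwise distance matrices for TopSim (Eq. 8). The function names are ours, and the simple rank routine breaks ties by order rather than averaging them, unlike a full Spearman implementation:

```python
import numpy as np

def context_independence(counts):
    """CI (Eq. 7) from a concept-by-message co-occurrence matrix:
    counts[c, m] = how often concept c was described with message m."""
    counts = np.asarray(counts, dtype=float)
    p_m_given_c = counts / counts.sum(axis=1, keepdims=True)  # p_m(m|c)
    p_c_given_m = counts / counts.sum(axis=0, keepdims=True)  # p_c(c|m)
    ci = 0.0
    for c in range(counts.shape[0]):
        m_c = int(np.argmax(p_c_given_m[c]))  # m_c = argmax_m p_c(c|m)
        ci += p_m_given_c[c, m_c] * p_c_given_m[c, m_c]
    return ci / counts.shape[0]

def _ranks(x):
    # Simple ranking; ties are broken by order, not averaged.
    order = np.argsort(x)
    ranks = np.empty(len(x))
    ranks[order] = np.arange(len(x))
    return ranks

def topographic_similarity(d_concepts, d_messages):
    """TopSim (Eq. 8): Spearman rho between the two pairwise-distance matrices."""
    iu = np.triu_indices_from(d_concepts, k=1)  # each unordered pair once
    a, b = _ranks(d_concepts[iu]), _ranks(d_messages[iu])
    return float(np.corrcoef(a, b)[0, 1])

# A perfectly context-independent protocol: each concept has its own message.
print(context_independence(np.eye(3) * 5))  # → 1.0
D = np.array([[0., 1., 4.], [1., 0., 2.], [4., 2., 0.]])
print(topographic_similarity(D, D))         # ≈ 1.0 (identical geometries)
```

Passing the same distance matrix for both spaces gives the maximal score, a convenient sanity check before applying the metric to learned embeddings.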
arXiv:2505.22092v1 [cs.AI] 28 May 2025

VIRAL: VISION-GROUNDED INTEGRATION FOR REWARD DESIGN AND LEARNING

Valentin Cuzin-Rambaud, Emilien Komlenovic, Alexandre Faure, Bruno Yun
Université Claude Bernard Lyon 1, France
CNRS, Ecole Centrale de Lyon, INSA Lyon, Université Lumière Lyon 2, LIRIS, UMR5205, France
{valentin.cuzin-rambaud,emilien.komlenovic,alexandre.faure2}@etu.univ-lyon1.fr
bruno.yun@univ-lyon1.fr

ABSTRACT

The alignment between humans and machines is a critical challenge in artificial intelligence today. Reinforcement learning, which aims to maximize a reward function, is particularly vulnerable to the risks associated with poorly designed reward functions. Recent advancements have shown that Large Language Models (LLMs) for reward generation can outperform human performance in this context. We introduce VIRAL, a pipeline for generating and refining reward functions through the use of multi-modal LLMs. VIRAL autonomously creates and interactively improves reward functions based on a given environment and a goal prompt or annotated image. The refinement process can incorporate human feedback or be guided by a description generated by a video LLM, which explains the agent's policy in video form. We evaluated VIRAL in five Gymnasium environments, demonstrating that it accelerates the learning of new behaviors while ensuring improved alignment with user intent. The source code and demo video are available at: https://github.com/VIRAL-UCBL1/VIRAL and https://youtu.be/t4_BXugBm9Q.

Keywords Reward shaping · Large language models · Vision

1 Introduction

Reward shaping [1] is a fundamental challenge in Reinforcement Learning (RL), involving the design of reward functions that efficiently guide an agent towards desired behaviors. A poorly designed reward can lead to unintended behaviors, while a well-crafted one accelerates learning and ensures alignment with human intent.
However, designing effective rewards is labor-intensive and requires significant expertise, particularly in complex environments [2, 3, 4, 5]. Early work on automated reward design [6] leveraged natural language processing techniques — such as recurrent neural networks and word embeddings — to construct reward signals, demonstrating promising results in Atari games. More recently, LLMs have gained traction in RL and robotics due to their versatility in solving diverse problems [7, 8]. Early attempts at using LLMs for reward shaping [9] employed GPT-3 as a binary reward signal, but this approach was limited in scope and applicability. More recent advances [10, 11, 12] have leveraged OpenAI's GPT-4 model to generate code for reward functions, demonstrating competitive performance and improved adaptability across different environments. However, these methods focus only on text-based inputs (disregarding vision), and their reliance on a closed-source, computationally expensive LLM hinders reproducibility and accessibility. We propose the Vision-grounded Integration for Reward design And Learning (VIRAL) framework to design reward functions from simple user prompts and/or an annotated image. VIRAL sets itself apart from the state of the art [10, 12] in several key ways. First, VIRAL prioritizes the use of open-source, efficient, and lightweight LLMs [13, 14], ensuring greater accessibility, cost efficiency, and transparency. Second, unlike prior methods, VIRAL integrates Large Vision Language Models (LVLMs) [15, 16] to process both text and images, enhancing its ability to interpret user intent more accurately. Third, VIRAL is the first reward shaping approach to incorporate Video-LVLMs [17] for describing the movements of objects within the scene, providing richer context for reward function
https://arxiv.org/abs/2505.22092v1
generation. Finally, instead of relying on direct access to the environment's code (as in EUREKA [10]) or on structured abstraction (through Pydantic class definitions, as in Text2Reward [12]), VIRAL describes environments solely through their observations, following the Gymnasium [18] documentation. This simplifies implementation for users while ensuring the LLM captures the necessary information for coherent reward generation. Our main contributions are as follows: (1) The VIRAL pipeline to automatically design reward functions using a simple natural-language prompt and/or an annotated image. (2) A self-refinement process for reward functions, augmented with human feedback or a video description from an LVLM. (3) A scalable and modular implementation designed to adapt to various RL problems within the Gymnasium framework, leveraging multiprocessing for faster inference. (4) An evaluation of VIRAL in five Gymnasium environments, showing its various benefits for learning and user-intent alignment, using an empirical study. This paper is organized as follows: Section 2 details the architecture and inner workings of VIRAL. Section 3 presents our evaluation methodology and results. Section 4 provides concluding remarks.

2 The VIRAL Architecture

This section details the VIRAL procedure for generating multiple reward functions, which are rated to find the best agent behaviors.

Figure 1: The VIRAL pipeline. Given an input (a textual environment description, an optional success function, and a goal prompt), the system generates a set of reward functions and iteratively refines them.

2.1 The Input Parameters

For its input, VIRAL uses a set of specific elements (see top of Figure 1). It includes (a1) a textual environment description d_env, (a2) an optional implementation of a success function f_succ provided as Python code, and (b) the goal prompt p_g (which can be multi-modal).
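A minimal sketch of how these three inputs could be represented in Python; all names below (ViralInput, lander_success, the field names) are illustrative assumptions, not VIRAL's actual code. The leg-contact indices follow the documented Gymnasium LunarLander observation layout.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Illustrative container for VIRAL's three inputs; field names are hypothetical.
@dataclass
class ViralInput:
    d_env: str                                 # (a1) textual environment description
    f_succ: Optional[Callable[[object], int]]  # (a2) optional success function -> {0, 1}
    p_g: str                                   # (b) goal prompt (text; may be paired with an image)

# Hypothetical success function for LunarLander: success iff both legs touch down.
def lander_success(obs) -> int:
    # Gymnasium LunarLander: observation indices 6 and 7 are the leg-contact flags.
    left_leg_contact, right_leg_contact = obs[6], obs[7]
    return int(left_leg_contact == 1.0 and right_leg_contact == 1.0)

inp = ViralInput(
    d_env="LunarLander-v3: 8-dim observation (position, velocity, angle, leg contacts)",
    f_succ=lander_success,
    p_g="Land the lander along the red trajectory shown in the image",
)
```

Note how this success function matches the partial-success caveat discussed below: it returns 1 whenever the lander touches down, regardless of the trajectory followed.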
To give the prompt p_g, the user can choose between a text prompt, an annotated image img, or both. The img can include annotations (arrows, text, areas, etc.). The input d_env is extracted from Gymnasium and provides an accurate representation of the environment's observable space to the LLM, adding more depth to the generated reward function. Using text for d_env allows any Gymnasium environment to be seamlessly integrated with VIRAL. The input f_succ : States(env) → {0, 1} is a success function that must be tailored to the goal. This function determines how success is manifested within the specific context of the task, helping the LLM design the reward function. Note that while f_succ must be related to the goal prompt p_g, returning 1 (success) can mean a partial success. For example, for a textual goal prompt p_g: "Land the lander along the red trajectory shown in the image", f_succ may return 1 if the lander managed to land, even while ignoring the trajectory.

2.2 The Initial Generation

For the initial generation, we opted for zero-shot prompting, despite the effectiveness of few-shot prompting [19]. This was motivated by generalization needs and the difficulty for users to provide examples, making zero-shot prompting more practical and user-friendly. VIRAL implements a collaboration between two LLMs (critic and coder), with specific system prompts for their roles. The former must be a good supervisor and
specify steps to help the latter produce good-quality code. Among the strategies employed to enhance zero-shot generation, one particularly effective method is step-back prompting [20], which obtains a broader perspective by first reasoning about a problem at a higher level before generating a detailed response (see step 1 of Fig. 1). The critic LLM generates the step-back prompt, which is given to the coder LLM (see step 2 of Fig. 1). Note that only the critic LLM needs to be multi-modal. The generated code is checked for syntax and logical issues using a try-catch process. If errors are caught, they are sent back to the coder LLM via a custom prompt, allowing it to revise its output.

2.3 Learning a Policy

The generated reward functions take an observation as a parameter, along with two boolean values indicating whether there is a success or a failure.1 Once a reward function is generated, an RL algorithm is used to make the agent search for its policy. We restrict ourselves to Deep Q-Network (DQN) [21] and Proximal Policy Optimization (PPO) [22] and select the most suitable one for each environment (step 3 of Fig. 1). During training, rewards and observations are retrieved; if the user wishes, they can implement objective metric functions that take these observations and return a dictionary of useful objective metrics for the environment. During testing, we can compare the success rate of our custom reward with the baseline "legacy" reward function r_legacy or with a user-defined threshold, which determines an acceptable success rate (step 4 of Fig. 1).

2.4 The Refinement Process

The refinement process unfolds as follows (see step 5 of Fig. 1). The critic LLM analyzes the training results by leveraging collected statistics, such as state observations during the run, and optional feedback from a user or a Video-LVLM.
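The reward-function interface and the try-catch check described in Section 2.3 can be sketched as follows. The function body and the helper name compile_reward are hypothetical examples of what the coder LLM might emit and how the check could work, not VIRAL's actual implementation.

```python
# A generated reward function takes the observation plus two booleans
# (success / failure), as described in Section 2.3. This body is a
# made-up example of coder-LLM output for CartPole.
GENERATED_SRC = """
def reward_func(observation, is_success, is_failure):
    pole_angle = observation[2]          # CartPole: index 2 is the pole angle
    r = 1.0 - abs(pole_angle)            # upright pole -> higher reward
    if is_success:
        r += 10.0
    if is_failure:
        r -= 10.0
    return r
"""

def compile_reward(src):
    """Try-catch check: compile the generated code and smoke-test it once.
    On error, the message would be sent back to the coder LLM for revision."""
    try:
        namespace = {}
        exec(src, namespace)
        fn = namespace["reward_func"]
        fn([0.0, 0.0, 0.0, 0.0], False, False)   # basic logical smoke test
        return fn, None
    except Exception as err:
        return None, str(err)

reward_func, error = compile_reward(GENERATED_SRC)
```

If compile_reward returns an error string instead of a function, that string plays the role of the custom prompt content sent back to the coder LLM.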
Indeed, to identify potential reasons why the reward function underperformed, feedback from a user or from Qwen2.5-VL can provide a more precise description of the agent's learned behavior. These observations are passed to the coder LLM, which creates an improved reward function by addressing the identified weaknesses and aligning with the intended objectives. These methods are combined into an iterative approach: in each iteration, a refined reward function is evaluated and compared to its previous version. If it does not pass the evaluation, the refinement process is repeated until the desired performance level is achieved.

3 Empirical evaluation

For our evaluation, we used Qwen2.5-Coder-32B [23, 13] as the coder LLM, due to its performance being comparable to that of GPT-4, making it a reliable choice for this role. For the multi-modal critic LLM, we chose Llama3.2-Vision-11B [16], as it is lightweight, open-source, and well-suited to our framework's requirements. For the Video-LVLM, we used Qwen2.5-VL-7B [24, 25].2 The evaluation was performed with an NVIDIA A40 GPU and a 12-core Intel CPU. In our experiments, we used a selection of environments from the Gymnasium toolkit to evaluate the performance of our generated reward functions. Namely, the classic environments of CartPole and Lunar Lander allowed us to test fundamental control and optimization strategies in
simple yet challenging scenarios. Next, we selected the Highway environment, where a vehicle must navigate a multi-lane road, avoiding collisions and optimizing its speed while adhering to driving rules. Given the increasing relevance of autonomous vehicles, we considered it essential to include this environment in our study, as it addresses modern-day challenges in automation and decision-making. Finally, we incorporated two robotics-focused environments, Hopper and Swimmer, both derived from the MuJoCo physics simulator. These environments played a crucial role in evaluating the efficiency of the generated reward functions for robotic locomotion and control systems. All results are available in CSV format in our GitHub repository.

1 Note that an agent can be in a neutral state, without a success or a failure.
2 Our framework is model-agnostic and allows the use of any LLM for the coder and critic LLMs and the Video-LVLM.

Better Rewards. We evaluated the efficiency of VIRAL by comparing the behavior of agents trained using our generated reward function (without the refining process) against those trained with the legacy reward function. We instantiated VIRAL with Qwen2.5-Coder-32B as the critic/coder LLM (text-only goal prompt). For the CartPole environment trained with PPO, with the goal prompt "Create a reward function for the CartPole environment that encourages keeping the pole upright for as long as possible.", Table 1 shows that the policy learned with our reward function outperforms that of Gymnasium. Additionally, we can see that the cart moves slightly more on average, resulting in a more stable pole overall.

Table 1: Comparison of legacy reward vs. ours (avg. over 10 runs)

Reward function    Cart position diff.    Pole angle diff.    Success rate
gymnasium          0.281202               0.064587            0.58700
ours               0.308173               0.062499            0.85300

For the Highway environment trained with DQN, and the goal prompt "Control the ego vehicle to reach a high speed without collision."
, we obtained better success rates than the legacy reward function 7 times out of 10 (with a median success rate of 0.78).

Semantic Alignment. We conducted a study with 25 annotators to determine whether the learned behavior is aligned with the goal prompt provided (text, image, or both).3 Each annotator annotated a subset of 120 videos (4 environments and 10 videos per goal-prompt modality). On average, each annotator rated 71 videos, for a total of 1777 annotated videos. For each video, annotators used a 5-point Likert scale (from "Strongly disagree" to "Strongly agree") to answer the following two questions: (1) I understand the instructions, and (2) The instructions are followed. The human evaluation allowed us to assess the effect of the modality on semantic alignment. For the first item, we obtained a mean of 4.89 ± 0.41, with only 147 videos receiving an understanding score below 5/5, highlighting that very few goal prompts were misunderstood and confirming that the goal prompts were well-crafted and understandable. Fig. 2 shows the mean ratings for the second item for each environment and modality. Overall, although using a visual goal prompt was not as good as a text-only goal prompt, we still obtained a mean of 2.14 ± 1.42 (out of 5)
across environments. However, as VIRAL's main goal is to find the best reward, we investigated which modality led to very good behavior. Table 2 shows that, with the image modality, we discover behaviors that align better than with the other modalities on Swimmer and Highway. These observations led us to the conclusion that different modalities can help us discover behaviors that align more closely with human intent, despite producing fewer correct behaviors on average.

Enhanced with Feedback. To show the improvement brought by the refining process, we compare the performance of a generated reward function to that of a reward function refined with a single iteration of feedback obtained from Qwen2.5-VL-7B. In the LunarLander environment, defining the success criterion as a safe landing between the two yellow poles, we carried out 10 runs with and without feedback, with 100 tests at the end of each training session. We found that the use of the Video-LVLM creates reward functions that are on average 18.33% better in terms of success rate.

3 For a full description of the goal prompts used, we refer the reader to our GitHub repository.

Table 2: Maximum selection of average rating per video, comparison by modality.

Environment        Image p_g      Textual p_g    Image+Textual p_g
Hopper-v5          3.41 ± 1.00    4.92 ± 0.28    4.83 ± 0.39
LunarLander-v3     3.83 ± 1.26    4.92 ± 0.28    4.93 ± 0.26
Swimmer-v5         4.19 ± 0.75    4.14 ± 0.95    3.41 ± 1.33
Highway-fast-v0    4.44 ± 1.29    4.06 ± 1.03    3.91 ± 1.38

Figure 2: Semantic alignment for different environments and modalities.

4 Conclusion

We introduced VIRAL, a pipeline for generating and refining reward functions through the use of multi-modal LLMs. We demonstrated that our approach outperformed legacy reward functions in terms of performance and flexibility. Our results showed that agents were able to learn new behaviors based on simple sketches, highlighting the robustness of our approach.
Furthermore, the integration of feedback (from a Video-LVLM or a user) enabled agents to better align with the intended objectives, providing a deeper understanding of the desired behaviors. In future work, we plan to adapt pre-existing policies to learn new behaviors. This approach could pave the way for greater policy generalization and smoother transitions between different sets of complex tasks.

References

[1] Andrew Y. Ng, Daishi Harada, and Stuart Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In Ivan Bratko and Saso Dzeroski, editors, Proceedings of the Sixteenth International Conference on Machine Learning (ICML 1999), Bled, Slovenia, June 27-30, 1999, pages 278–287. Morgan Kaufmann, 1999.
[2] Paul F. Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 4299–4307, 2017.
[3] Borja Ibarz, Jan Leike, Tobias Pohlen, Geoffrey Irving, Shane Legg, and Dario Amodei. Reward learning from human preferences and demonstrations in atari. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman
Garnett, editors, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 8022–8034, 2018.
[4] Kimin Lee, Laura M. Smith, and Pieter Abbeel. PEBBLE: feedback-efficient interactive reinforcement learning via relabeling experience and unsupervised pre-training. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 6152–6163. PMLR, 2021.
[5] Jongjin Park, Younggyo Seo, Jinwoo Shin, Honglak Lee, Pieter Abbeel, and Kimin Lee. SURF: semi-supervised reward learning with data augmentation for feedback-efficient preference-based reinforcement learning. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022.
[6] Prasoon Goyal, Scott Niekum, and Raymond J. Mooney. Using natural language for reward shaping in reinforcement learning. arXiv preprint arXiv:1903.02020, 2019.
[7] Brian Ichter, Anthony Brohan, Yevgen Chebotar, Chelsea Finn, Karol Hausman, Alexander Herzog, Daniel Ho, Julian Ibarz, Alex Irpan, Eric Jang, Ryan Julian, Dmitry Kalashnikov, Sergey Levine, Yao Lu, Carolina Parada, Kanishka Rao, Pierre Sermanet, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Mengyuan Yan, Noah Brown, Michael Ahn, Omar Cortes, Nicolas Sievers, Clayton Tan, Sichun Xu, Diego Reyes, Jarek Rettinghouse, Jornell Quiambao, Peter Pastor, Linda Luu, Kuang-Huei Lee, Yuheng Kuang, Sally Jesmonth, Nikhil J. Joshi, Kyle Jeffrey, Rosario Jauregui Ruano, Jasmine Hsu, Keerthana Gopalakrishnan, Byron David, Andy Zeng, and Chuyuan Kelly Fu. Do as I can, not as I say: Grounding language in robotic affordances.
In Karen Liu, Dana Kulic, and Jeffrey Ichnowski, editors, Conference on Robot Learning, CoRL 2022, 14-18 December 2022, Auckland, New Zealand, volume 205 of Proceedings of Machine Learning Research, pages 287–318. PMLR, 2022.
[8] Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, and Animesh Garg. Progprompt: Generating situated robot task plans using large language models. In IEEE International Conference on Robotics and Automation, ICRA 2023, London, UK, May 29 - June 2, 2023, pages 11523–11530. IEEE, 2023.
[9] Minae Kwon, Sang Michael Xie, Kalesha Bullard, and Dorsa Sadigh. Reward design with language models. arXiv preprint arXiv:2303.00001, 2023.
[10] Yecheng Jason Ma, William Liang, Guanzhi Wang, De-An Huang, Osbert Bastani, Dinesh Jayaraman, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Eureka: Human-level reward design via coding large language models. arXiv preprint arXiv:2310.12931, 2023.
[11] Jiayang Song, Zhehua Zhou, Jiawei Liu, Chunrong Fang, Zhan Shu, and Lei Ma. Self-refined large language model as automated reward function designer for deep reinforcement learning in robotics. arXiv preprint arXiv:2309.06687, 2023.
[12] Tianbao Xie, Siheng Zhao, Chen Henry Wu, Yitao Liu, Qian Luo, Victor Zhong, Yanchao Yang, and Tao Yu. Text2reward: Reward shaping with language models for reinforcement learning. In The Twelfth International Conference on Learning Representations, 2024.
[13] Binyuan Hui, Jian Yang, Zeyu Cui, Jiaxi Yang, Dayiheng Liu, Lei Zhang, Tianyu Liu, Jiajun Zhang, Bowen Yu, Kai Dang, et al. Qwen2.5-coder technical report. arXiv preprint arXiv:2409.12186, 2024.
[14] Daya Guo, Dejian Yang,
Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
[15] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in Neural Information Processing Systems, 36, 2024.
[16] Meta. Llama-3.2-11b-vision-instruct. https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct, 2024.
[17] Bin Lin, Yang Ye, Bin Zhu, Jiaxi Cui, Munan Ning, Peng Jin, and Li Yuan. Video-llava: Learning united visual representation by alignment before projection. arXiv preprint arXiv:2311.10122, 2023.
[18] Mark Towers, Ariel Kwiatkowski, Jordan Terry, John U. Balis, Gianluca De Cola, Tristan Deleu, Manuel Goulão, Andreas Kallinteris, Markus Krimmel, Arjun KG, et al. Gymnasium: A standard interface for reinforcement learning environments. arXiv preprint arXiv:2407.17032, 2024.
[19] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020.
[20] Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, Heng-Tze Cheng, Ed H. Chi, Quoc V. Le, and Denny Zhou. Take a step back: Evoking reasoning via abstraction in large language models. arXiv preprint arXiv:2310.06117, 2023.
[21] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with Deep Reinforcement Learning, December 2013.
[22] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal Policy Optimization Algorithms, August 2017.
[23] An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, Guanting Dong, Haoran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, Jin Xu, Jingren Zhou, Jinze Bai, Jinzheng He, Junyang Lin, Kai Dang, Keming Lu, Keqin Chen, Kexin Yang, Mei Li, Mingfeng Xue, Na Ni, Pei Zhang, Peng Wang, Ru Peng, Rui Men, Ruize Gao, Runji Lin, Shijie Wang, Shuai Bai, Sinan Tan, Tianhang Zhu, Tianhao Li, Tianyu Liu, Wenbin Ge, Xiaodong Deng, Xiaohuan Zhou, Xingzhang Ren, Xinyu Zhang, Xipin Wei, Xuancheng Ren, Yang Fan, Yang Yao, Yichang Zhang, Yu Wan, Yunfei Chu, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zhihao Fan. Qwen2 technical report. arXiv preprint arXiv:2407.10671, 2024.
[24] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Yang Fan, Kai Dang, Mengfei Du, Xuancheng Ren, Rui Men, Dayiheng Liu, Chang Zhou, Jingren Zhou, and Junyang Lin. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024.
[25] Qwen Team. Qwen2.5-vl, January 2025.
arXiv:2505.22093v1 [cs.CY] 28 May 2025

From Coders to Critics: Empowering Students through Peer Assessment in the Age of AI Copilots

Santiago Berrezueta-Guzman, Technical University of Munich, Heilbronn, Germany, s.berrezueta@tum.de
Stephan Krusche, Technical University of Munich, Munich, Germany, krusche@tum.de
Stefan Wagner, Technical University of Munich, Heilbronn, Germany, stefan.wagner@tum.de

Abstract—The rapid adoption of AI-powered coding assistants like ChatGPT and other coding copilots is transforming programming education, raising questions about assessment practices, academic integrity, and skill development. As educators seek alternatives to traditional grading methods susceptible to AI-enabled plagiarism, structured peer assessment could be a promising strategy. This paper presents an empirical study of a rubric-based, anonymized peer review process implemented in a large introductory programming course. Students evaluated each other's final projects (a 2D game), and their assessments were compared to instructor grades using correlation, mean absolute error (MAE), and root mean square error (RMSE). Additionally, reflective surveys from 47 teams captured student perceptions of fairness, grading behavior, and preferences regarding grade aggregation. Results show that peer review can approximate instructor evaluation with moderate accuracy and foster student engagement, evaluative thinking, and interest in providing good feedback to peers. We discuss the implications of these findings for designing scalable, trustworthy peer assessment systems in the age of AI-assisted coding.

Index Terms—Peer assessment, programming education, AI-assisted coding, GitHub Copilot, ChatGPT, academic integrity, code review, student engagement, rubric-based evaluation, computer science education.

I. INTRODUCTION

AI-powered coding assistants have rapidly emerged as a disruptive force in programming education.
Tools like GitHub Copilot and OpenAI's ChatGPT are now widely accessible – GitHub Copilot became freely available to students upon launch, and ChatGPT reached an unprecedented 100 million users within two months of release [1]. These "AI coding-copilots" can automatically generate solutions to various programming tasks and even explain code in plain language. Such capabilities promise to support learners with on-demand help, but they fundamentally challenge traditional teaching and assessment practices in computer science [2], [3]. Research indicates that the advent of coding copilots has led to a shift in learning habits, with traditional resources such as YouTube being increasingly supplanted as students' primary source of programming support [4]. Educators and researchers are increasingly concerned about the implications of these tools for academic integrity. Early analyses warn that generative AI systems raise challenges and concerns, particularly regarding academic honesty and plagiarism [5]. If students can obtain AI-generated answers to assignments, it becomes difficult to gauge their actual understanding or ensure the originality of their work. Instructors have noted that a submission produced with the aid of AI coding copilots may not accurately reflect a student's programming ability. There is also a broader worry that ubiquitous AI assistance could normalize new forms of cheating and erode students' and institutions' commitment to honesty in coursework [6]. As one study put it, academic integrity hangs in the balance with the rise of AI code generators [7]. Beyond integrity issues, the advent of AI coding copilots has sparked debate about their effect on students' learning processes and skill development. These tools might give hints, explanations, and examples that scaffold novices' learning. However, many
https://arxiv.org/abs/2505.22093v1
educators fear that over-reliance on AI suggestions will short-circuit the mastery of fundamental programming skills and collaborative work in software projects [8]. A recent survey of computer science instructors found that while all were immediately concerned about AI-assisted cheating, they diverged on the long-term pedagogical response. Some advocated strict limitations or "bans" on AI helpers to ensure students continue practicing problem-solving and code writing unaided, reinforcing essential competencies. Others argued for embracing these tools in the curriculum – teaching students with AI and preparing them for a future where AI-assisted coding is the norm [9]. The rise of these AI pair-programmers presents a double-edged sword in programming education. Educators are now challenged to rethink course policies and assignment design to uphold educational standards [6]. Therefore, this study investigates the implementation of a structured, anonymized peer review system within a large introductory programming course. Specifically, we analyzed how accurately students evaluate their peers' final projects using a detailed rubric, and how their assessments compare to those of instructors. In addition to quantifying grading reliability through statistical metrics, we gathered student reflections on fairness, engagement, and preferences regarding grade aggregation. Drawing on quantitative and qualitative data from 47 teams of three students, this study offers empirical evidence supporting peer assessment as a viable pedagogical strategy for practical programming education in an era increasingly shaped by AI coding tools and reduced opportunities for critical thinking.

II. RELATED WORK

Peer review has increasingly been adopted in programming education as a pedagogical tool to engage students in active learning and reflective practice.
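The grading-reliability metrics named above (correlation, MAE, and RMSE between peer and instructor scores) can be sketched as follows. The scores are illustrative placeholders, not data from this study.

```python
import math

def mae(peer, instructor):
    """Mean absolute error between two score lists."""
    return sum(abs(p - i) for p, i in zip(peer, instructor)) / len(peer)

def rmse(peer, instructor):
    """Root mean square error; penalizes large disagreements more than MAE."""
    return math.sqrt(sum((p - i) ** 2 for p, i in zip(peer, instructor)) / len(peer))

def pearson(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative scores out of 100 (made-up, not from the study).
peer       = [82, 74, 91, 65, 88]
instructor = [80, 70, 95, 60, 85]
```

A high correlation with a moderate MAE/RMSE would correspond to the "moderate accuracy" finding reported in the abstract: peers rank projects similarly to instructors while differing somewhat in absolute points.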
Several studies report that incorporating peer code review (PCR) activities helps students develop essential programming skills [10]. Reviewing each other's code exposes students to diverse solution strategies and helps them learn to give and receive constructive feedback, fostering collaboration and critical thinking. PCR provides students with more extensive feedback on their work than is possible from instructors alone, and it trains them in peer evaluation practices. In fact, multiple peer reviews from classmates can yield insights comparable to a single expert review [11]. Lin et al. [12] conducted a randomized controlled trial in a blended introductory programming course and found that students who participated in structured PCR showed significantly improved computational thinking skills and higher engagement levels than a control group. Furthermore, the additional feedback and perspectives gained through peer review can increase students' time-on-task and encourage deeper reflection. However, using peer review in programming education also presents several challenges. One is the reliability and fairness of student-generated feedback, specifically potential biases or inconsistencies in peer grading. Novice reviewers may lack the experience to assess code rigorously, leading to variability in the quality of evaluations. Many students are initially skeptical of peer feedback, questioning the credibility of reviews written by colleagues who are still learning the material [13]. This skepticism can manifest as resistance to acting on peer comments or reluctance to critique peers openly. Another significant challenge is low student engagement in the review process. Studies have reported that participation rates in voluntary peer review activities can
be very low (under 20% of the class) without proper incentives or guidance, and the feedback provided tends to be perfunctory. For instance, reviewers may leave only brief comments (e.g., one-line remarks), and some of these comments can be partially incorrect or superficial. Reviewers might rush through the task or feel unsure how to constructively critique a peer's code [14]. Despite the challenges, recent work suggests strategies to mitigate these problems and enhance the effectiveness of peer review. Clear assessment rubrics and training can help standardize evaluations, reducing variability in how different students grade the same code. Anonymizing submissions and reviewers is another tactic to counteract personal bias and encourage honest feedback. Additionally, integrating a reward mechanism or gamification elements has been shown to improve student motivation and the quality of reviews; for example, adding game-like points or badges for thorough feedback led students to write longer, more specific comments in a peer review system [15]. The literature indicates that when peer review is well-structured, with appropriate scaffolding and alignment to learning objectives, it can be a powerful tool in programming education, complementing instructor feedback and actively engaging students in learning [14], [16]. At the same time, authors note that much of the evidence for PCR's benefits comes from student self-reports, and there is a need for more rigorous studies to objectively measure learning gains and to explore methods for addressing peer feedback reliability. Bassner et al. [17] introduce Iris, an AI-driven virtual tutor integrated into the interactive learning platform Artemis, which provides personalized, context-aware support for computer science students.
While Iris focuses on fostering independent problem-solving through subtle hints and Socratic questioning, its design illustrates how AI tools can support the development of critical thinking skills—an essential complement to the peer assessment strategies discussed in this paper. While prior research has explored the pedagogical benefits and challenges of peer code review in programming education, most studies focus on subjective reports, small-scale implementations, or contexts without comparison to expert evaluation. What differentiates our work is its empirical focus on the accuracy and perception of structured peer assessment in a large, real-world introductory programming course. By combining rubric-based evaluation, statistical alignment with instructor grades, and reflective survey responses from 47 teams, our study offers a comprehensive, data-driven perspective on the reliability, fairness, and pedagogical value of peer review, providing actionable insights for the design of scalable and trustworthy peer evaluation systems in CS education.

III. COURSE CONTEXT

Fundamentals of Programming (FoP) is an introductory, hands-on course for first-semester students in the Informatics Bachelor's program. No prior programming knowledge is required. The curriculum covers essential topics such as control structures, data types, Object-Oriented Programming (OOP) concepts, streams, graphical user interfaces (GUIs), and recursion [18]. Its assessment is entirely based on practical activities. As shown in Table I, 60% of the final grade comes from developing and presenting a final project.

TABLE I
OVERVIEW OF THE GRADING SCHEMA FOR FOP

Activity                Grade percentage
Homework exercises      20 %
In-class exercises      20 %
Project development     40 %
Project presentation    20 %
https://arxiv.org/abs/2505.22093v1
Total grade            100 %

The project centers on creating a classic-style 2D game inspired by the concept of Maze Runner. The gameplay revolves around steering a character through a complex maze. The main goal is to progress from the starting point to the exit while dealing with hidden traps, hostile entities, and a locked gate that can only be opened with a key. The maze features walls, forming an intricate network of paths, dead ends, and twists. Players must carefully navigate the environment, evade or confront threats on the map, and find keys necessary to unlock the exit and complete the level. Students had 10 weeks to complete the project, working in teams of three members. Each team could choose an advisor (a course student assistant). While we provide a problem description, a Java project template, a continuous integration (CI) tool, and guidance on using Git for version control, no formal training is given in task coordination, conflict resolution, collaborative project management, or use of artificial intelligence (AI) tools. Table II outlines the gameplay, design, and user experience grade distribution. Table III focuses on software engineering practices such as code and documentation. Each category includes several subcomponents totaling a maximum of 100 points (plus up to 10 bonus points). Bonus criteria were included in select categories to reward teams implementing additional features beyond the minimal requirements.

TABLE II
RUBRIC FOR EVALUATING GAMEPLAY DESIGN, GRAPHICAL QUALITY, SOUND DESIGN, AND GUI FEATURES OF THE PROJECT.
Category and explanation             Points
Game World                           25
  Entrance and multiple exits         3
  Logical level design, challenges    7
  Reasonable obstacle placement       4
  Enemy AI and interaction            4
  Key/exit mechanism                  2
  Pick-up items (power-ups, lives)    5
  Bonus: Additional features         +3
Main Character                       15
  4-direction movement                3
  Collision detection                 3
  Lives lost by triggers (traps)      3
  Camera follows player               3
  Sprinting ability                   3
  Bonus: Player mechanics            +3
GUI                                  15
  Main menu options                   4
  Persistent HUD (lives, keys, etc.)  2
  Victory/Game Over with return       4
  Scoreboard summary                  5
  Bonus: Responsive/friendly UI      +1
Sound Design                         10
  Music for gameplay and menus        6
  SFX for actions/events              4
  Bonus: Layered sounds              +1
Graphics                             10
  Style matches theme                 3
  Resolution scaling                  1
  No visual discomfort                2
  No graphical errors/artifacts       4
  Bonus: Art style/detailing         +2

TABLE III
RUBRIC FOR ASSESSING CODE STRUCTURE AND DOCUMENTATION.

Category and explanation             Points
Code Structure                       15
  Object-oriented structure           4
  Use of superclasses                 4
  No code duplication                 3
  Use of delegation/inheritance       4
Documentation                        10
  JavaDoc for methods                 6
  Comments in long methods            2
  README with instructions            2

IV. METHODOLOGY

Once the project's development phase was completed and before the final presentation, we implemented a structured peer-review process. We developed an algorithm that anonymously assigned each team to review two projects created by other groups in the class, ensuring that no team reviewed its own project or the same project more than once. The reviews were conducted using a detailed grading rubric that outlined the evaluation criteria across several dimensions, including game mechanics, user interface, sound design, graphical quality, code structure, and documentation. This evaluation rubric differs from
previous related work because it assessed the functional and technical quality of each team's programming project, not only the code.

A. Assessment process

Gameplay assessment. Each assessor team must clone and play the game developed by their assigned peers to complete the evaluation rubric detailed in Table II. In addition to the structured criteria, the rubric includes a few provocative questions, such as "Is this game better than yours? Please justify your answer." This prompt encouraged students to evaluate their peers' work not only against rubric criteria but also against their own project's quality. Finally, to qualify for the final project presentation, the game must be fully functional, executable, and playable, a determination that is also left to the assessors' judgment.

Code Assessment. To complete the evaluation described in Table III, students must apply their knowledge of OOP, clean code principles, and library management. This enables them to identify errors and understand their peers' projects to assign fair and accurate grades for each criterion. Additionally, they must assess the quality of the project's documentation, including how clearly the instructions for playing and maintaining the game are written. Both assessments are submitted through a unified form on a dedicated Confluence page, where only the instructors can access the identities of the assessor and assessed teams. This anonymity ensures impartial evaluations and allows instructors to conduct a thorough analysis afterward.

B. Reliability analysis

Each project was also evaluated by the course instructors using the same rubric. We encouraged all teams to involve every member in discussing each evaluation criterion collaboratively to minimize discrepancies and reduce potential biases that might arise from individual assessments.
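The anonymous review assignment described above can be implemented with simple rejection sampling over shuffled team lists; a minimal sketch (team identifiers, the seed, and the retry bound are illustrative, not the course's actual algorithm):

```python
import random

def assign_reviews(teams, reviews_per_team=2, seed=42, max_tries=1000):
    """Assign each team `reviews_per_team` projects to review such that
    no team reviews its own project or the same project twice, and every
    project is reviewed exactly `reviews_per_team` times."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        assignment = {t: [] for t in teams}
        valid = True
        # Each shuffled copy of the team list assigns every project once,
        # so k copies give every project exactly k reviewers.
        for _ in range(reviews_per_team):
            targets = list(teams)
            rng.shuffle(targets)
            for reviewer, target in zip(teams, targets):
                if target == reviewer or target in assignment[reviewer]:
                    valid = False  # self-review or duplicate: reject and retry
                assignment[reviewer].append(target)
        if valid:
            return assignment
    raise RuntimeError("no valid assignment found; increase max_tries")
```

For 47 teams a valid draw is typically found within a handful of retries; a deterministic round-robin shift (team i reviews teams i+1 and i+2 modulo n) would satisfy the same constraints without sampling.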
Furthermore, requiring assessors to justify their assigned grades prompted students to carefully analyze peer submissions and offer fair and constructive feedback. We compared the peer-assigned scores with those given by instructors to analyze the degree of alignment and identify discrepancies. This comparison served as the basis for evaluating the accuracy of peer assessments. We also calculated key statistical metrics for peer evaluations against the instructor's score: Pearson correlation coefficient, mean absolute error (MAE), and root mean square error (RMSE). These metrics provided insights into the potential of integrating peer review as a pedagogical tool in introductory programming education.

C. Reflexive analysis

To complement the quantitative evaluation of peer grading accuracy, we administered a short reflective survey to all teams immediately after completing their peer review tasks but before receiving the results. The goal was to capture students' perceptions and attitudes toward the peer assessment process. The survey included Likert-scale and open-ended questions. Specifically, teams were asked whether they believed the grades given by peers would be higher or lower than those assigned by instructors. They were asked to self-assess the strictness of their own evaluations using a four-point scale ranging from Very Strict to Not Strict. Additionally, teams reflected on whether they considered their evaluations fair and whether they would prefer the average or the highest peer-assigned scores to be counted if discrepancies with instructor grades emerged. Finally, students were asked whether they enjoyed acting as evaluators. This
feedback aimed to provide insights into the perceived fairness and acceptability of peer grading and the students' level of engagement in the process.

V. RESULTS

A. Reliability analysis results

Table IV shows that Peer Review 1 had a moderate positive correlation with instructor grades (r = 0.55), a MAE of 9.18, and an RMSE of 14.87. Peer Review 2 exhibited a slightly weaker correlation (r = 0.50), with a higher MAE of 10.68 and RMSE of 16.37.

TABLE IV
COMPARISON OF PEER REVIEW ACCURACY VS INSTRUCTORS'

Metric                          Peer Review 1   Peer Review 2
Correlation                     0.55            0.50
Mean Absolute Error (MAE)       9.18            10.68
Root Mean Square Error (RMSE)   14.87           16.36

These results suggest that while peer reviewers were generally aligned with instructors, there were some discrepancies and outliers. This is more evident in Figure 1, where each peer review score is compared against the corresponding instructor grade. While many data points cluster around the Perfect Match Line (the red dashed line) that indicates agreement, several deviations are visible, particularly for Peer Review 2, reinforcing the RMSE results. This means some peer reviewers assigned significantly inflated (to the right of the Perfect Match Line) or deflated (to the left of the Perfect Match Line) grades relative to the instructor's score. Figure 2 shows that the median and interquartile ranges are relatively close across all three distributions, indicating general consistency in grading. However, several outliers can be observed, particularly in Peer Review 2, indicating that some peer-assigned scores were significantly higher or lower than the instructor's evaluation.

Fig. 1. Comparison between peer review scores and instructor grades.
Fig. 2. Distribution of Peer Review 1, Peer Review 2, and Instructor Grades highlighting the median, interquartile ranges, and outliers (diamonds).

These results indicate that while peer assessment can often approximate instructor evaluations, variability remains a concern.
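The agreement statistics in Table IV follow directly from paired score lists; a minimal sketch (the scores below are made up for illustration, not the study's data):

```python
import math

def agreement_metrics(peer, instructor):
    """Pearson correlation, MAE, and RMSE between two paired score lists."""
    n = len(peer)
    mean_p = sum(peer) / n
    mean_i = sum(instructor) / n
    cov = sum((p - mean_p) * (q - mean_i) for p, q in zip(peer, instructor))
    std_p = math.sqrt(sum((p - mean_p) ** 2 for p in peer))
    std_i = math.sqrt(sum((q - mean_i) ** 2 for q in instructor))
    r = cov / (std_p * std_i)                                  # Pearson r
    mae = sum(abs(p - q) for p, q in zip(peer, instructor)) / n
    rmse = math.sqrt(sum((p - q) ** 2 for p, q in zip(peer, instructor)) / n)
    return r, mae, rmse

# Hypothetical scores for five projects (0-100 scale)
peer = [85, 92, 70, 88, 95]
instructor = [80, 90, 75, 85, 90]
r, mae, rmse = agreement_metrics(peer, instructor)
```

RMSE penalizes large single-reviewer deviations more than MAE, which is why the outliers in Peer Review 2 show up chiefly in its RMSE.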
Peer Review 1 was closer to the instructor's grades than Peer Review 2, suggesting that reviewer training, motivation, or randomness in assignment may influence review quality.

B. Reflexive analysis results

On the other hand, the results collected in the post-evaluation survey provided insights into student perceptions of fairness, grading behavior, and engagement with the peer review process.

Expected Peer vs. Instructor Grades. When asked whether they believed their project would receive higher, lower, or similar grades from their peers compared to the instructor, most teams anticipated more favorable peer evaluations. Specifically, 23 teams (49%) expected a higher grade from their peers, 14 teams (30%) expected a lower grade, 8 teams (17%) expected similar outcomes, and 2 teams (4%) were undecided. This distribution suggests that many students perceive peers as more lenient or empathetic evaluators.

Grading Strictness Toward Peers. Teams were asked to self-report the level of strictness they applied when evaluating other teams. Of the 47 teams, 26 (55%) described their grading as "normal," 15 teams (32%) as "strict" or "very strict," and 6 teams (13%) as mixed or context-dependent. Notably, no team reported being overly lenient, indicating a shared sense of responsibility in applying the evaluation rubric. This is corroborated by Figure 1, which
shows less dispersed points to the right of the Perfect Match Line than to the left.

Fairness in Peer Assessment. All 47 teams (100%) believed they provided fair evaluations. Many cited the consistent use of the grading rubric, collaborative discussion within the team, and an effort to remain unbiased despite challenges such as unclear documentation or incomplete features in the reviewed projects. This widespread belief in fairness supports the pedagogical value of structured peer review.

Peer Comparison Reflections. In response to the reflective and comparative question "Is this game better than yours? Please justify your answer," 28 teams (60%) answered "No," stating that their own game was equal to or better than the one they reviewed. Their justifications often included reasons such as greater technical complexity, a more polished user interface, or more complete functionality. Conversely, 15 teams (32%) admitted that the reviewed game was better than theirs. These teams frequently cited innovative design choices, a higher level of polish, or additional gameplay features as reasons for their evaluation. The remaining 4 teams (8%) gave neutral or uncertain answers, often noting that both games had different strengths and weaknesses. Responses varied from technical comparisons to more subjective impressions, such as creativity and playability. This question proved effective in encouraging students to think critically and reflect on the strengths and limitations of their own work. Interestingly, when comparing their answers to the final grades, 38 teams (82%) were accurate in their self-assessment. These results suggest that most students could recognize the relative quality of their projects compared to others, demonstrating a healthy level of self-awareness and evaluative judgment.

Preferred Grade Policy.
To explore student preferences in how peer evaluations should be factored into final grades, teams were asked whether they would prefer the average or the highest peer score in cases of discrepancy. A substantial majority, 32 teams (68%), preferred the highest score to be used, citing concerns about receiving an unfairly low score from a single reviewer; 12 teams (26%) preferred the average, and 3 teams (6%) proposed using the instructor grade alone.

Enjoyment of the Evaluator Role. When asked about their evaluator experience, 39 teams (83%) reported enjoying the role. Comments emphasized the opportunity to explore different design ideas, learn from other teams' solutions, and gain empathy for the grading process. Six teams (13%) gave neutral responses, while only two teams (4%) indicated they did not enjoy the task, typically citing incomplete projects as reasons.

VI. DISCUSSION

Working in teams is a fundamental aspect of learning programming, as it mirrors real-world software development practices and fosters collaborative problem-solving [19]. Teamwork allows students to share diverse perspectives, divide complex tasks, and learn from one another's strengths and mistakes. This collaborative dynamic becomes even more valuable during peer assessment, where evaluating other teams' projects as a group encourages critical discussion, a deeper understanding of code quality, and more balanced, fair evaluations. The findings of this study provide important insights into the feasibility and reliability of using peer assessment in introductory programming education. Our results suggest
that structured peer review, supported by detailed rubrics and anonymized processes, can be a reasonably accurate proxy for instructor evaluation. Additionally, we anticipate that a good peer assessment involves the evaluators discussing the rubrics and understanding the project's components before providing feedback and a grade for each one.

Finding 1: Close alignment with instructor grades. The statistical comparison between peer and instructor evaluations demonstrated a moderate correlation for both review rounds. The minimal difference, reflected in lower MAE and RMSE values for Peer Review 1, may be attributed to variations in student engagement, attention to the rubric, or reviewer fatigue. Prior studies have similarly noted that reviewer variability, especially among novice programmers, can affect the consistency of peer evaluations [13], [14]. On the other hand, Peer Review 2 exhibited greater dispersion and more outliers, which could stem from the random assignment of weaker or less motivated reviewers. Despite this variation, the results reaffirm prior literature showing that peer feedback can approximate expert grading under structured conditions [11], [12].

Finding 2: Structured rubrics enhance reliability. The rubric provided a shared framework that likely contributed to the overall alignment with instructor scores. Nevertheless, a few projects were significantly underrated by their peers, raising questions about addressing bias and ensuring equity in grading. These variations highlight opportunities and challenges in implementing such systems at scale in software engineering projects in other courses.

Finding 3: Social and cognitive dimensions. All 47 teams believed their evaluations were fair, even as nearly half anticipated receiving more favorable scores from peers than from instructors.
This belief in fairness, paired with widespread enjoyment of the evaluator role (83%), suggests that students took the responsibility seriously and found value in critically reviewing others' projects. These perceptions are consistent with earlier findings that peer review can foster reflection, deepen engagement, and build evaluative judgment among learners [10]. However, a few teams preferred to consider only the highest grade in case of discrepancies, highlighting concerns about potential unfairness from a single peer review.

Finding 4: Self-awareness and evaluative judgment. The majority of students were able to accurately assess the relative quality of their own projects compared to those of their peers. This reflects a strong sense of self-awareness and evaluative judgment, key competencies in both academic and professional programming contexts. The ability to critically reflect on one's work and compare it fairly against external benchmarks suggests that peer assessment supports grading and fosters metacognitive skills. Such reflective capabilities are especially important in the age of AI-assisted coding, where understanding quality, originality, and design trade-offs becomes as important as writing code itself.

While statistical alignment with instructor scores is a positive indicator, it is equally important to cultivate transparency and offer feedback loops that allow learners to reflect on the quality of their evaluations. In future iterations, additional training, calibration exercises, or use of tutor-assisted scoring could further improve the consistency and fairness of peer assessments.

VII. CONCLUSIONS

This study examined the effectiveness of structured, anonymized peer assessment in a large-scale introductory programming course, demonstrating that students
can reliably evaluate each other's work using detailed rubrics. The alignment between peer and instructor evaluations and positive student perceptions of fairness and engagement underscore the pedagogical potential of peer review in programming education. Importantly, peer assessment also promotes key competencies that are increasingly vital in the age of AI copilots, namely critical thinking, evaluative judgment, and collaborative reflection. As coding assistants automate more aspects of programming tasks, assessing, critiquing, and understanding quality code becomes essential for learners to remain active participants in the development process, rather than passive users of AI-generated solutions.

As programming education evolves in response to technological shifts, scalable and participatory assessment models will become increasingly important. Future research should explore ways to refine peer review systems through reviewer training, calibration sessions, or adaptive weighting of scores. Additionally, integrating formative peer assessment across multiple stages of the project lifecycle, not only at the end, could offer ongoing feedback and learning opportunities. Investigating the long-term impact of such practices on learning outcomes, ethical reasoning, and professional readiness in AI-augmented environments remains a valuable direction for further work.

Ultimately, our findings suggest that well-designed peer assessment is a practical response to the challenges posed by AI coding tools and a meaningful strategy for empowering students in reflective, responsible, and collaborative learning environments.

REFERENCES

[1] C. I. Chang, W. C. Choi, and I. C. Choi, "A systematic literature review of the opportunities and advantages for AIGC (OpenAI ChatGPT, Copilot, Codex) in programming course," in Proceedings of the 2024 7th International Conference on Big Data and Education, pp. 29–35, 2024.
[2] S. Lau and P.
Guo, "From 'ban it till we understand it' to 'resistance is futile': How university programming instructors plan to adapt as more students use AI code generation and explanation tools such as ChatGPT and GitHub Copilot," in Proceedings of the 2023 ACM Conference on International Computing Education Research - Volume 1, pp. 106–121, 2023.
[3] M. Wermelinger, "Using GitHub Copilot to solve simple programming problems," in Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1, pp. 172–178, 2023.
[4] S. Berrezueta-Guzman, P. Bassner, S. Wagner, and S. Krusche, "Code Collaborate: Dissecting team dynamics in first-semester programming students," in 2024 21st International Conference on Information Technology Based Higher Education and Training (ITHET), pp. 1–10, IEEE, 2024.
[5] A. Simmons, M. Holanda, C. Chamon, and D. Da Silva, "AI generated code plagiarism detection in computer science courses: A literature mapping," in 2024 IEEE Frontiers in Education Conference (FIE), pp. 1–7, IEEE, 2024.
[6] J. Hutson, "Rethinking plagiarism in the era of generative AI," Journal of Intelligent Communication, vol. 3, no. 2, pp. 20–31, 2024.
[7] S. A. Bin-Nashwan, M. Sadallah, and M. Bouteraa, "Use of ChatGPT in academia: Academic integrity hangs in the balance," Technology in Society, vol. 75, p. 102370, 2023.
[8] G. Akçapınar and E. Sidan, "AI chatbots in programming education: Guiding success or encouraging plagiarism," Discover Artificial Intelligence, vol. 4, no. 1, p. 87, 2024.
[9] D. R. Cotton, P. A.
Cotton, and J. R. Shipway, "Chatting and cheating: Ensuring academic integrity in the era of ChatGPT," Innovations in Education and Teaching International, vol. 61, no. 2, pp. 228–239, 2024.
[10] S. Bradley, "Addressing bias to improve reliability in peer review of programming coursework," in Proceedings of the 19th Koli Calling International Conference on Computing Education Research, pp. 1–10, 2019.
[11] S. Strickroth, "Does peer code review change my mind on my submission?," in Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1, pp. 498–504, 2023.
[12] X. Lin, Y. Ma, W. Ma, Y. Liu, and W. Tang, "Using peer code review to improve computational thinking in a blended learning environment: A randomized control trial," Computer Applications in Engineering Education, vol. 29, no. 6, pp. 1825–1835, 2021.
[13] A. Alkhalifa and M. Devlin, "Student perspectives of peer assessment in programming courses," in Proceedings of the 2021 Conference on United Kingdom & Ireland Computing Education Research, pp. 1–7, 2021.
[14] T. Brown, M. R. Narasareddygari, M. Singh, and G. Walia, "Using peer code review to support pedagogy in an introductory computer programming course," in 2019 IEEE Frontiers in Education Conference (FIE), pp. 1–7, IEEE, 2019.
[15] T. D. Indriasari, P. Denny, D. Lottridge, and A. Luxton-Reilly, "Gamification improves the quality of student peer code review," Computer Science Education, vol. 33, no. 3, pp. 458–482, 2023.
[16] T. D. Indriasari, A. Luxton-Reilly, and P. Denny, "A review of peer code review in higher education," ACM Transactions on Computing Education (TOCE), vol. 20, no. 3, pp. 1–25, 2020.
[17] P. Bassner, E. Frankford, and S. Krusche, "Iris: An AI-driven virtual tutor for computer science education," in Proceedings of the Conference on Innovation and Technology in Computer Science Education, ITiCSE 2024, pp. 394–400, ACM, 2024.
[18] S. Krusche and J.
Berrezueta-Guzman, "Introduction to programming using interactive learning," in 2023 IEEE 35th International Conference on Software Engineering Education and Training (CSEE&T), pp. 178–182, IEEE, 2023.
[19] S. Berrezueta-Guzman, I. Parmacli, M. K. Habib, S. Krusche, and S. Wagner, "Assessing teamwork dynamics in software development projects," arXiv preprint arXiv:2501.11965, 2025.
arXiv:2505.22096v1 [cs.CL] 28 May 2025

Knowledge Base Construction for Knowledge-Augmented Text-to-SQL

Jinheon Baek1* Horst Samulowitz2 Oktie Hassanzadeh2
Dharmashankar Subramanian2 Sola Shirai2 Alfio Gliozzo2 Debarun Bhattacharjya2
KAIST1 IBM Research2
jinheon.baek@kaist.ac.kr {samulowitz, hassanzadeh}@us.ibm.com
dharmash@us.ibm.com solashirai@ibm.com {gliozzo, debarunb}@us.ibm.com

Abstract

Text-to-SQL aims to translate natural language queries into SQL statements, which is practical as it enables anyone to easily retrieve the desired information from databases. Recently, many existing approaches tackle this problem with Large Language Models (LLMs), leveraging their strong capability in understanding user queries and generating corresponding SQL code. Yet, the parametric knowledge in LLMs might be limited to covering all the diverse and domain-specific queries that require grounding in various database schemas, which oftentimes makes generated SQLs less accurate. To tackle this, we propose constructing a knowledge base for text-to-SQL, a foundational source of knowledge, from which we retrieve and generate the necessary knowledge for given queries. In particular, unlike existing approaches that either manually annotate knowledge or generate only a few pieces of knowledge for each query, our knowledge base is comprehensive: it is constructed from a combination of all the available questions and their associated database schemas along with their relevant knowledge, and can be reused for unseen databases from different datasets and domains. We validate our approach on multiple text-to-SQL datasets, considering both the overlapping and non-overlapping database scenarios, where it outperforms relevant baselines substantially.
1 Introduction

Text-to-SQL aims to transform natural language queries from users into Structured Query Language (SQL) statements, to interact with and retrieve information from databases (Zelle and Mooney, 1996; Xu et al., 2017; Yaghmazadeh et al., 2017; Cai et al., 2018), as illustrated in Figure 1 (A). This task has recently gained much attention since it allows non-experts to access and manipulate database information without needing to understand complex database languages. In the meantime, Large Language Models (LLMs) have shown impressive capabilities in processing and generating text and code, which have been further extended for text-to-SQL (Rajkumar et al., 2022; Gao et al., 2024).

*Work done during an internship at IBM Research.

Despite their huge successes, transforming user queries into SQL statements may still be challenging due to the need for specific domain knowledge and an understanding of the underlying database schemas, which poses a significant hurdle even for the most advanced LLMs to achieve high accuracy across diverse datasets (Li et al., 2023). For example, consider a scenario where the user asks the query: "What is the WACC for Company X?". To accurately translate this into an SQL statement, the text-to-SQL model should understand the concept and calculation of Weighted Average Cost of Capital (WACC), which involves multiple factors including the cost of equity, the cost of debt, and the respective proportions of each in the capital structure. In addition, the model needs to comprehend the specific schema of the financial database, where relevant data is distributed across multiple tables such as 'Equity', 'Debt', and 'Capital Structure'. To tackle the aforementioned limitations due to the lack of domain-specific knowledge for SQL generation, recent studies have
proposed collecting and annotating explicit knowledge, which is then leveraged for SQL generation (Dou et al., 2022; Li et al., 2023). However, while these approaches substantially improve the performance of existing text-to-SQL models, they rely on extensive human annotations, which may be suboptimal (and nearly impractical) to conduct for all queries considering a diverse source of domain-specific knowledge from numerous databases. To address this issue, recent work proposes generating a few pieces of knowledge for each query based on the query itself and its relevant database schema (Hong et al., 2024) (see Figure 1 (B)).

Figure 1: (A) Text-to-SQL aims to translate a user query into a SQL statement executable over a database, to access the desired information. (B) Existing Text-to-SQL with Knowledge Generation approaches first generate the knowledge relevant to the user query and then formulate the SQL statement with this generated knowledge. (C) Our Text-to-SQL with Knowledge Base Construction approach builds a repository of knowledge and then reuses the knowledge within it across multiple queries and databases. (Right:) We observe that the knowledge in the training set of the text-to-SQL benchmark dataset (Li et al., 2023) covers 21% of the knowledge required for test-time queries, and our constructed knowledge base further covers 50% of them.

However, although this method demonstrates promise in automatic knowledge generation, certain knowledge required for one query
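The WACC example from the introduction can be made concrete; a minimal sketch of the simplified formula spelled out in Figure 1 (C), which weights each financing cost by its share of total capital (note this simplified form omits the tax adjustment on debt, and the numbers are illustrative):

```python
def wacc(equity_value, debt_value, cost_of_equity, cost_of_debt):
    """WACC = (Equity/Total) * EquityCost + (Debt/Total) * DebtCost."""
    total = equity_value + debt_value
    return (equity_value / total) * cost_of_equity \
         + (debt_value / total) * cost_of_debt

# 60M equity at 10% cost, 40M debt at 5% cost -> 0.08 (8%)
rate = wacc(60e6, 40e6, 0.10, 0.05)
```

A correct text-to-SQL system must recover exactly this weighting from the schema and domain knowledge before it can emit the corresponding SELECT over the Equity and Debt tables.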
can be directly reused or provide insights for multiple queries within the same database, as shown in Figure 1 (Right). Also, this knowledge can be generalizable to other queries for different databases.

Motivated by these observations, this work proposes an automatic approach to build a knowledge base, designed to serve as a comprehensive repository of domain-specific knowledge for text-to-SQL and capable of providing knowledge for multiple queries over the same database and even across different databases. To construct this knowledge base, we generate knowledge entries based on available samples and their associated database schemas through LLM prompting, and then compile all of them together. During this prompting process, we provide LLMs with relevant examples to contextualize and guide the generation of useful knowledge in the right format that is further grounded in the database schema. Then, once constructed, the knowledge base allows for the retrieval of relevant knowledge for the given test-time query, which is then used alongside the query to formulate the SQL statement. Note that while ideally the knowledge base would cover all possible queries, it may not always do so. Nevertheless, the existing knowledge in it could still offer valuable insights for generating the required knowledge for new queries. Thus, by leveraging similar knowledge from the knowledge base, we further prompt LLMs to produce the most suitable knowledge for the query at inference time. We call our method Knowledge-Augmented Text-to-SQL (KAT-SQL), depicted in Figure 1 (C).

We experimentally validate the proposed KAT-SQL on two different text-to-SQL scenarios, involving both overlapping and non-overlapping databases between training and test phases, showing that the proposed knowledge base construction-based text-to-SQL approach surpasses the existing (knowledge-augmented) text-to-SQL baselines.
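The retrieve-then-generate step of the pipeline described above can be approximated with any similarity search over knowledge entries; a minimal sketch using token-level Jaccard overlap (the knowledge entries, stopword list, and scoring are illustrative stand-ins, not the paper's actual retriever):

```python
STOPWORDS = {"what", "is", "the", "a", "an", "of", "for"}

def tokens(text):
    """Lowercased, punctuation-stripped content words."""
    words = (w.strip("?.,:()*/+-").lower() for w in text.split())
    return {w for w in words if w and w not in STOPWORDS}

def jaccard(a, b):
    sa, sb = tokens(a), tokens(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def retrieve(query, knowledge_base, k=2):
    """Return the k knowledge entries most similar to the query."""
    return sorted(knowledge_base, key=lambda e: jaccard(query, e),
                  reverse=True)[:k]

# Hypothetical knowledge entries, echoing the Figure 1 examples
knowledge_base = [
    "WACC formula: (EquityValue / TotalValue) * EquityCost"
    " + (DebtValue / TotalValue) * DebtCost",
    "ROE formula: Net Income / Shareholder Equity",
    "The Equity table contains columns MarketValue and Cost",
]
hits = retrieve("What is the WACC for Company X?", knowledge_base, k=1)
```

In the described method the retrieved entries are not used verbatim: they serve as context from which the LLM produces query-specific knowledge, so a dense embedding retriever would slot into the same position.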
We also assess the generalizability of our knowledge base constructed from one dataset by applying it to different datasets that lack any annotated knowledge, demonstrating that our knowledge base is versatile and can effectively improve SQL generation even for unseen databases from other datasets.

2 Related Work

LLM-Powered Text-to-SQL LLMs have shown remarkable performance across a wide range of tasks (OpenAI, 2023; Anil et al., 2023; AI@Meta, 2024), including text-to-SQL, due to their strong capability in understanding natural language and generating structured code (Rajkumar et al., 2022; Gao et al., 2024). Specifically, various studies have developed and advanced prompting techniques for text-to-SQL, for example, using Chain-of-Thought (CoT) (Wei et al., 2022; Liu and Tan, 2023; Tai et al., 2023), investigating sophisticated prompt design strategies (Chang and Fosler-Lussier, 2023), and aggregating LLM-generated outputs from multiple prompts (Lee et al., 2024; Dong et al., 2023) akin to self-consistency (Wang et al., 2023b). In addition, another line of study proposes decomposing the text-to-SQL problem into multiple subtasks, and feeding the solutions of subtasks (from multiple models or agents) into the LLM to derive the final SQL statement (Gu et al., 2023; Pourreza and Rafiei, 2023; Wang et al., 2023a). The knowledge internalized in LLMs might however not be sufficient to handle diverse
queries, which oftentimes require grounding in the database schemas or additional domain-specific information for specialized domains, which gives rise to the need for leveraging external knowledge for text-to-SQL.

Knowledge-Augmented Text-to-SQL There are a few recent studies that propose augmenting text-to-SQL models with explicit knowledge. Specifically, Dou et al. (2022) collect formulaic knowledge (e.g., Trade Balance = Exports – Imports) available from public resources such as finance reports and store the collected knowledge into a knowledge bank with proper human-involved post-processing. The text-to-SQL model then retrieves relevant knowledge for any given query from the knowledge bank and uses it to convert the query into the SQL statement. In addition, Li et al. (2023) release a large-scale benchmark dataset for the text-to-SQL task, where each question is associated with specific knowledge that is manually annotated by humans. Manual annotation is however costly and time-consuming, requiring effort and expertise on the part of domain experts. To address this challenge, more recent work proposes automatically generating the knowledge based on the question and database schema, and utilizing this knowledge for text-to-SQL (Hong et al., 2024). In our work, instead of generating only a few pieces of knowledge for each question, we propose to construct a comprehensive knowledge base. This provides a repository of reusable knowledge that can be leveraged across multiple queries, which can be further adapted to various databases over different domains in a scalable way, in contrast to existing work.

Data Generation with LLMs The recent advent of LLMs has revolutionized the field of data generation, as they can produce vast amounts of high-quality samples without costly human annotation.
Specifically, several efforts around LLM-based synthetic data generation, such as Self-Instruct (Wang et al., 2023c), Alpaca (Taori et al., 2023), Evol-Instruct (Xu et al., 2023), Orca (Mukherjee et al., 2023), and InstructLab (Sudalairaj et al., 2024), propose generating a large number of samples from LLMs by prompting them. Also, motivated by the capabilities of LLMs in generating synthetic data and memorizing factual knowledge, some other work aims to populate an encyclopedic knowledge base like Wikidata (Vrandecic and Krötzsch, 2014) with LLMs (Alivanistos et al., 2022; Nayak and Timmapathini, 2023; Veseli et al., 2023). Most of the knowledge in such encyclopedic knowledge bases is however unsuitable for text-to-SQL, since it is neither relevant to formulating SQL statements from user queries nor aware of the database schemas necessary for the query conversion. Thus, unlike them, our approach stands apart as the first to automatically construct a text-to-SQL knowledge base.

3 Method

In this section, we present Knowledge-Augmented Text-to-SQL (KAT-SQL), an approach that automatically constructs a knowledge base and utilizes the relevant knowledge from it for text-to-SQL.

3.1 Problem Statement

We begin by formally explaining text-to-SQL and the knowledge augmentation technique for it.

Text-to-SQL Text-to-SQL aims to translate a natural language query from a user into a syntactically correct and semantically precise SQL statement. Formally, let q be the user query (consisting of a sequence of tokens) and D be the database schema containing multiple
tables and columns. Then, the SQL generation model f can be represented as follows: s = f(q, D), where s is the SQL statement (consisting of a sequence of tokens) that attempts to retrieve the information requested by q over D. In this work, we operationalize f with LLMs, to harness their strong capability in understanding the semantics of q and generating the corresponding SQL code s, as follows: s = LLM_θ(T(q, D)), where θ is the model parameters and T is the prompt template. Typically, the model parameters θ remain fixed due to the high costs associated with further fine-tuning them and sometimes their limited accessibility. Also, the prompt template T serves as a structured format that outlines the context, which includes task descriptions and instructions as well as few-shot demonstrations, to guide the model in generating accurate SQL code. Notably, while there have been great successes in advancing the LLM itself and optimizing its usage for text-to-SQL, such as using advanced prompting techniques or breaking down the task into multiple subtasks (Wei et al., 2022; Liu and Tan, 2023; Tai et al., 2023; Gu et al., 2023; Pourreza and Rafiei, 2023; Wang et al., 2023a), these improvements alone may not be sufficient to fully handle queries that require deep domain knowledge or a precise understanding of complex database schemas. In other words, the internal parametric knowledge of LLMs, while robust, may not fully encompass the diverse range of query variations and database structures, especially when these databases have distinct schemas or certain specialized terminology.

Knowledge-Augmented Text-to-SQL To tackle the aforementioned limitations, we focus on augmenting text-to-SQL with the knowledge relevant to the query, providing valuable insights into the domain-specific terminology and complex database schemas. If we denote this knowledge as k, then the previous text-to-SQL process is redefined to incorporate it, as follows: s = LLM_θ(T(q, k, D)).
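To make the formulation concrete, the following sketch shows one way the prompt template T(q, D) and its knowledge-augmented variant T(q, k, D) could be assembled. This is our own illustrative assumption, not the paper's released prompt; `call_llm` is a hypothetical stand-in for any LLM inference API.

```python
# Illustrative sketch of s = LLM_theta(T(q, D)) and s = LLM_theta(T(q, k, D)).
# The template wording and the `call_llm` hook are assumptions for exposition.

def build_prompt(query, schema, knowledge=None):
    """Assemble the text-to-SQL prompt T(q, D); if `knowledge` is given,
    produce the knowledge-augmented variant T(q, k, D) instead."""
    parts = [
        "Translate the user question into a SQL statement.",
        f"Database schema:\n{schema}",
    ]
    if knowledge is not None:
        parts.append(f"Relevant knowledge:\n{knowledge}")
    parts.append(f"Question: {query}")
    parts.append("SQL:")
    return "\n\n".join(parts)

def text_to_sql(call_llm, query, schema, knowledge=None):
    # theta stays fixed: only the prompt is controlled, not the model weights.
    return call_llm(build_prompt(query, schema, knowledge))
```

Note that the same template machinery serves both the plain and the knowledge-augmented settings, which is why adding k is a purely input-side change.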
While there have been a few studies that explore this knowledge-augmented text-to-SQL paradigm, there are still a couple of challenges. Specifically, Dou et al. (2022) and Li et al. (2023) propose collecting and annotating the explicit knowledge required to convert queries into SQL statements. Yet, to operationalize, this annotation-based approach can be costly and time-consuming, especially when dealing with a large number of diverse queries. On the other hand, Hong et al. (2024) propose an automatic generation of knowledge, based on the question and its associated database schema. However, this method is still limiting as it generates only a few pieces of knowledge for each query without leveraging the potential for reuse. In contrast, since much of the knowledge used for one query can be applicable to multiple similar queries (see Figure 1, Right), we aim to design a more effective approach for knowledge augmentation, discussed below.

3.2 Knowledge Base Construction

To address the aforementioned limitations of existing approaches in knowledge augmentation for text-to-SQL, we propose a novel approach to automatically construct a comprehensive and reusable knowledge base. Ideally, this can serve as a foundational resource, encapsulating diverse domain information and offering insights into various database schemas, to enhance the understanding of queries and their associated database structures. Formally, we design this knowledge
base K as a collection of knowledge entries, each represented as a concise sentence, denoted as k ∈ K. For instance, in the medical domain, one knowledge entry might be "Abnormal white blood cell count refers to WBC ≤ 3.5 or WBC ≥ 9.0", which describes the abnormal range of white blood cell counts and its corresponding column name "WBC" in the database schema, applicable to queries related to abnormal white blood cells. The next question to answer is then how to construct this knowledge base based on the available resources. In this work, we start by collecting all the existing knowledge entries from the publicly available dataset (Li et al., 2023), which includes the knowledge and its related pairs of queries and database schemas. Yet, while this initial collection can serve as the foundational layer of our knowledge base, it may not capture the full scope of the required information. To address this gap, we propose an automatic knowledge base expansion technique that leverages LLMs, which possess domain-specific knowledge and the ability to comprehend the given context (including instructions, code, and database structures), to generate additional knowledge entries. Specifically, given the query and its associated database schema from the available datasets, we prompt LLMs (along with a prompting template T for knowledge generation) to produce the knowledge, formulated as k = LLM(T(q, D)), and then store this knowledge k into the knowledge base K. In addition, as it may be more accurate and reliable to provide the LLM with relevant examples (which can help it understand the context, nuances, and expectations of the desired output), we further prepend a small number of relevant examples to the prompt of the LLM.
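The relevant-example selection used during this expansion, together with the sampling-and-permutation diversification described in this subsection, can be sketched as follows. The cosine-similarity selection is a simplified stand-in for the MPNet-based retrieval; the vector representations and function names are our own illustrative assumptions.

```python
# Sketch (our simplification, not the released code) of selecting few-shot
# triplets (query, schema, knowledge) by embedding-level cosine similarity,
# plus the sampling/permutation step that diversifies generated knowledge.
import math
import random

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def top_k_examples(query_vec, examples, k):
    """examples: list of (triplet, embedding); return the k most similar triplets."""
    ranked = sorted(examples, key=lambda e: cosine(query_vec, e[1]), reverse=True)
    return [triplet for triplet, _ in ranked[:k]]

def sample_and_permute(candidates, n_shots, rng):
    """One knowledge-generation step: sample a subset of relevant examples
    and permute their order before placing them in the prompt."""
    shots = rng.sample(candidates, min(n_shots, len(candidates)))
    rng.shuffle(shots)
    return shots
```

Repeating `sample_and_permute` across generation steps is what exposes the LLM to different contextual orderings of the same candidate pool.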
It is worth noting that these examples are comprised of triplets of user queries, their associated database schemas, and the knowledge they are derived from, and that those triplets come from the existing dataset (used to construct the initial knowledge base). Also, we select only those highly relevant to the query based on its embedding-level cosine similarities with samples from the existing dataset, calculated by MPNet (Song et al., 2020). This process can ultimately enable the LLM to generate more precise and contextually appropriate knowledge for text-to-SQL. In addition to this relevant example-based knowledge generation approach, to further enrich the diversity and comprehensiveness of the knowledge base, we implement a simple yet effective strategy that involves sampling and permutation of the few-shot examples provided to the LLM. Specifically, for the given query and its associated database schema, instead of generating their corresponding knowledge only once, we iteratively sample a different set of relevant examples (provided to contextualize the LLM) multiple times and further permute their order. This can allow the LLM to explore different contextual nuances and generate a wider range of knowledge entries, with the goal of ultimately increasing the robustness and applicability of the knowledge base for a broader range of queries.

3.3 Text-to-SQL with Knowledge Base

Based on the LLM-powered knowledge base construction process, we now have the
knowledge base K. Hereafter, the next question to answer is how to use this knowledge base for text-to-SQL.

Algorithm 1 Knowledge-Augmented Text-to-SQL
Require: Dataset D containing query-schema pairs (q, D); LLM model LLM; prompt templates T
Ensure: SQL statement s for a given query q
 1: Phase 1: Knowledge Base Construction
 2: K ← {} ∪ D            ▷ Initialize knowledge base
 3: for all (q, D) ∈ D do
 4:   E ← Retrieve top-k relevant examples from D
 5:   k_new ← LLM(T_gen(q, D, E))   ▷ Generate knowledge
 6:   K ← K ∪ k_new                 ▷ Store knowledge
 7: end for
 8: Phase 2: Knowledge-Augmented SQL Generation
 9: function KAT-SQL(q, D, K)
10:   {k_i}_{i=1}^j ← Retrieve top-j knowledge from K
11:   k′ ← LLM(T_ref(q, {k_i}_{i=1}^j, D))   ▷ Refine knowledge
12:   s ← LLM(T_text-to-SQL(q, k′, D))       ▷ Generate SQL
13:   return s
14: end function

Figure 2: A simplified overview of the proposed KAT-SQL method. Please see Algorithms 2 and 3 for detailed versions.

Given the extensive nature of K, containing a large number of entries, it is crucial to identify and retrieve the most pertinent entries for the query q. Formally, this process can be represented as follows: {k_i}_{i=1}^j = Retriever(q, K). Also, this can be operationalized by calculating the embedding-level similarities between the query and all the knowledge entries in the knowledge base, then selecting the top-j similar entries {k_i}_{i=1}^j, where embeddings are obtained from a sentence embedding model (Karpukhin et al., 2020; Song et al., 2020). Moreover, to further enhance the retrieval accuracy, we train this embedding model with contrastive learning, which maximizes the similarity between the query and its relevant knowledge while minimizing the similarities of others, denoted as follows:

−log [ exp(sim(q, k⁺)/τ) / ( exp(sim(q, k⁺)/τ) + Σ_{k⁻} exp(sim(q, k⁻)/τ) ) ],

where sim(q, k) denotes the similarity measure between query q and knowledge k, τ is the temperature parameter, k⁺ is the relevant knowledge, and k⁻ represents the set of irrelevant knowledge.
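The contrastive objective above can be computed numerically as follows. This is a generic sketch of the InfoNCE-style loss in the formula, not the paper's training code; similarities are plain floats here, whereas in practice they would come from the trained embedding model.

```python
# Numeric sketch of the contrastive retrieval loss:
# -log exp(sim(q,k+)/tau) / (exp(sim(q,k+)/tau) + sum_{k-} exp(sim(q,k-)/tau))
import math

def contrastive_loss(sim_pos, sim_negs, tau=0.05):
    """sim_pos: sim(q, k+); sim_negs: iterable of sim(q, k-) for irrelevant entries."""
    pos = math.exp(sim_pos / tau)
    denom = pos + sum(math.exp(s / tau) for s in sim_negs)
    return -math.log(pos / denom)
```

As intended, the loss shrinks when the positive similarity grows relative to the negatives, which is what pushes the query embedding toward its relevant knowledge entry.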
Note that while the retrieved knowledge entries from K are relevant to the given query and can assist in SQL statement formulation, they may require additional refinement to perfectly align with the query's specific needs. For instance, if the user query pertains to abnormal data conditions, but the retrieved knowledge primarily focuses on normal data, a direct application of this knowledge could lead to inaccurate SQL generation. To address this issue, we further prompt the LLM to generate the knowledge tailored to the given query by considering its relevant knowledge entries and database schema, as follows: k′ = LLM(T(q, {k_i}_{i=1}^j, D)), where {k_i}_{i=1}^j is the knowledge retrieved from K. This refined knowledge k′ is subsequently used as input, along with the user query and its associated database schema, to guide the text-to-SQL LLM in generating a more accurate and contextually appropriate SQL statement: s = LLM(T(q, k′, D)). Please see Algorithm 1 for our overall approach.

4 Experimental Setup

4.1 Datasets and Tasks

Datasets To validate the efficacy of KAT-SQL, we first use two widely used text-to-SQL benchmark datasets, namely BIRD (Li et al., 2023) and Spider (Yu et al., 2018). Specifically, BIRD is a recently released large-scale text-to-SQL dataset, built on top of 95 distinct databases spanning 37 domains. Additionally, each query in this dataset is associated with knowledge that is manually annotated by humans, providing a useful
prior for formulating SQL statements. Spider is another benchmark dataset, built upon 200 databases across 138 domains. Unlike BIRD, samples in Spider do not have annotated knowledge for text-to-SQL. Lastly, we consider a challenging real-world text-to-SQL dataset, namely CSTINSIGHT, which is designed with actual customer queries over a data lakehouse with 34 tables, without human-annotated knowledge.

Tasks/Scenarios We evaluate our KAT-SQL on three realistic text-to-SQL tasks. First of all, we consider the scenario where the prior information about some samples and their associated knowledge for each database is available, meaning that the databases used in training samples overlap with those in test samples (Overlap). We note that this setting is practical, since annotating a few pairs of questions and their corresponding knowledge for each database in advance is feasible. In addition to this, we test KAT-SQL with the existing benchmark setup, which is more challenging since it assumes there are no overlaps between databases during the training and test phases (Non-Overlap). In other words, no samples from the test-time databases are available beforehand, which means the model should be able to generalize to test-time queries based on the schemas of test-time databases as well as the samples and their associated knowledge from the different (training-time) databases. Lastly, we validate KAT-SQL on the most challenging scenario, where there are no overlaps between the

Table 1: Main results on text-to-SQL benchmark datasets across multiple scenarios, with the best results in bold.
                    BIRD (Overlap)   BIRD (Non-Overlap)   Spider          CSTINSIGHT
Methods             EX      VES      EX      VES          EX      VES     EX      VES
No Knowledge        23.76   28.81    20.66   16.72        70.99   37.53   4.76    5.28
DELLM               34.70   33.15    24.64   19.27        72.44   42.90   11.90   12.02
KAT-SQL (Ours)      41.18   41.33    41.07   31.14        74.56   47.20   14.29   14.50
Oracle Knowledge    54.67   49.71    49.41   37.93        N/A     N/A     N/A     N/A

databases used during training and testing, but also no knowledge is available for both training and test samples. This setup aims to test the model's ability to generalize (in the absence of any prior knowledge about the dataset), allowing us to evaluate how well our knowledge base constructed with one dataset performs on different datasets. Notably, since the Spider and CSTINSIGHT datasets have no available knowledge for all queries, we use them for the most challenging last scenario; meanwhile, we use the BIRD dataset for the first two scenarios.

4.2 Baselines and Our Model

We compare our KAT-SQL approach against relevant baselines that target our primary objective of improving knowledge-augmented text-to-SQL systems, which vary in their usage of knowledge. We note that for the fairest comparison, we fix the LLM as the same for all methods, explained as follows:
1. No Knowledge – which uses only the queries themselves to formulate the SQL statements without any additional knowledge.
2. DELLM – which generates the knowledge based on the query and its relevant database structures, and uses this synthesized knowledge for text-to-SQL (Hong et al., 2024).
3. KAT-SQL – which is our model, building the knowledge base and utilizing the knowledge from it (with retrieval) for text-to-SQL.
4. Oracle Knowledge –
which uses oracle knowledge annotated by humans, along with the queries, to generate the SQL statements. This approach serves as an upper bound and is not directly comparable to other models due to its reliance on accurate, manually curated knowledge that is typically unavailable.

4.3 Evaluation Metrics

Following the standard evaluation protocols from prior work (Li et al., 2023; Hong et al., 2024), we use the following two metrics: 1) Execution Accuracy (EX), which measures the ratio of generated SQL code that has the same execution results as the ground-truth SQL code; 2) Valid Efficiency Score (VES), which considers the efficiency of generated SQLs by weighting them based on their relative efficiency improvement over ground-truth SQLs, further multiplied by execution accuracy.

Figure 3: Results for coverage and relevance of knowledge entries in the constructed knowledge base against gold knowledge, with different numbers of knowledge generation steps.

4.4 Implementation Details

We mainly use Llama-3 70B (AI@Meta, 2024) as the basis for text-to-SQL generation and knowledge generation across all baselines and our model variants for most experiments, for a fair comparison, while we also experiment with other LLMs in an analysis (Table 6) to see the robustness of KAT-SQL. For the hyperparameters, except for the temperature (which we set to 0.0 for reproducibility), we use the default values. In addition, for the retriever, we use MPNet (Song et al., 2020), which is based on dense retrieval; we train it with a batch size of 128 and 30 training epochs. We provide the detailed prompts used to elicit the knowledge and SQL generations in Appendix A.
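The Execution Accuracy metric can be made concrete with a small runnable sketch. We assume (a simplification of the official evaluators) that two SQL statements match when their execution results, compared as row multisets, are identical on the given database; invalid predicted SQL counts as a miss.

```python
# Minimal sketch of Execution Accuracy (EX), under the simplifying assumption
# that a match means identical row multisets on the same database.
import sqlite3

def execution_match(db, pred_sql, gold_sql):
    try:
        pred = db.execute(pred_sql).fetchall()
    except sqlite3.Error:
        return False  # invalid predicted SQL counts as a miss
    gold = db.execute(gold_sql).fetchall()
    return sorted(pred) == sorted(gold)

def execution_accuracy(db, pairs):
    """pairs: list of (predicted_sql, gold_sql); returns the EX ratio."""
    hits = sum(execution_match(db, p, g) for p, g in pairs)
    return hits / len(pairs) if pairs else 0.0
```

VES would additionally weight each hit by the predicted query's relative efficiency against the gold query, which requires timing the executions and is omitted here.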
5 Experimental Results and Analyses

Main Results We provide main results in Table 1, which confirms that our KAT-SQL approach consistently outperforms all baselines by large margins. Specifically, while we observe some performance improvement of the knowledge-augmented text-to-SQL approach (namely DELLM, which generates a few pieces of knowledge for each query) over the baseline without knowledge augmentation, KAT-SQL achieves even greater gains, demonstrating the effectiveness of our knowledge base construction-based text-to-SQL paradigm. However, the performance of the (incomparable) model with the oracle knowledge (annotated by human experts) remains superior to all other approaches, which suggests potential future opportunities for developing a more advanced pipeline for knowledge generation.

Table 2: Results for knowledge generation with and without the use of the Knowledge Base (KB), while varying the prompt construction with and without the relevant few-shot examples.

                    Overlap          Non-Overlap
KB       Few-Shot   EM      SS       EM      SS
w/o KB   Random     10.96   68.77    7.88    66.77
w/o KB   Retrieval  20.21   73.62    9.24    68.78
w/ KB    Random     11.13   69.14    7.93    66.80
w/ KB    Retrieval  24.97   77.87    12.94   71.24

Analysis on Knowledge Base To further understand the coverage and relevance of the knowledge within our knowledge base, we compare each piece of knowledge required for test-time queries with all the available entries in the knowledge base, as a function of the number of knowledge generation
steps during knowledge base construction. For evaluation, we use two metrics: Exact Match, which identifies whether the knowledge base contains an entry that precisely matches the knowledge required for a given query, and Semantic Similarity, which assesses how closely related the most similar entry (in the knowledge base) is to the required knowledge based on the embedding-level similarity. As shown in Figure 3, we observe that, under the Overlap setting, half of the knowledge entries needed for test-time queries are available in the knowledge base while the Semantic Similarity is around 90%, which demonstrates substantial coverage by our knowledge base. In addition, for the challenging setup where training and test databases are distinct, we still observe that 20% of the test-time knowledge entries are available in the knowledge base and that the Semantic Similarity exceeds 80%, showing the utility of our knowledge base. Finally, as we increase the number of knowledge generation steps for each instance during knowledge base construction, we observe a corresponding improvement in both coverage and relevance of our knowledge base, which supports the effectiveness of our expansion strategy to enrich its diversity.

Analysis on Knowledge Generation Recall that we further refine the retrieved knowledge to make it more suitable for each query, in addition to constructing the knowledge base and retrieving the relevant knowledge. Thus, to see how relevant the generated knowledge is to the human-annotated gold knowledge with regard to the use of our knowledge base, we report comparison results according

Table 3: Text-to-SQL results without using any knowledge, based on the retrieved knowledge, and based on the refined knowledge from the retrieved knowledge (Our KAT-SQL).
Settings      Models                        EX
Overlap       KAT-SQL (Ours)                41.18
              w/o Generation                38.94
              w/o Retrieval & Generation    23.76
Non-Overlap   KAT-SQL (Ours)                41.07
              w/o Generation                38.42
              w/o Retrieval & Generation    20.66

Table 4: Retrieval results with different scenarios and models.

Settings      Models    MRR      Top@3    Top@10
Overlap       BERT      0.5506   0.6621   0.8911
              TAS-B     0.5630   0.6943   0.9035
              TAS-B*    0.8288   0.9143   0.9765
Non-Overlap   BERT      0.2148   0.2692   0.4231
              TAS-B     0.2364   0.3846   0.4615
              TAS-B*    0.7565   0.8347   0.9210

to Exact Match and Semantic Similarity (SS) in Table 2. We observe that when we retrieve the relevant knowledge from the knowledge base and then use it for knowledge generation, there are performance gains over the case where we do not leverage it, which indicates that the retrieved knowledge is helpful in formulating the necessary knowledge for test-time queries. We also provide few-shot examples to guide the knowledge generation model in generating useful knowledge in the right format, and when we select them based on their similarities with the given query, we observe further gains in the quality of the generated knowledge. Beyond evaluating the quality of the generated knowledge by comparing it to the human-annotated gold knowledge, we also examine the impact of knowledge generation on downstream text-to-SQL performance with and without the incorporation of generated knowledge. As shown in Table 3, compared to the results without the knowledge retrieval and generation on both Overlap and Non-Overlap settings, there are
substantial improvements when we incorporate the retrieved knowledge from our knowledge base into the text-to-SQL generation process. Furthermore, instead of directly using the retrieved knowledge, refining this retrieved knowledge yields additional improvements, underscoring the importance of not only retrieving relevant knowledge but also tailoring it to better align with the specific needs of test-time queries.

Retrieval Analysis We also analyze the accuracy of knowledge retrieval from our knowledge base by reporting its retrieval performance in Table 4 according to Mean Reciprocal Rank (MRR) and Top@K Accuracy. We observe that the retrieval accuracy in the Overlap setting is higher than that in the Non-Overlap setting, due to the lower availability of relevant knowledge required for test-time queries in the Non-Overlap setting. Yet, when we replace the knowledge base constructed from our approach with the Oracle knowledge base (*), which includes all the necessary knowledge for test-time queries, the MRR in both settings reaches around 80%, indicating the importance of expanding the coverage of the knowledge base for accurate knowledge retrieval. The table also compares the performance of different basis models for retrieval – BERT (Devlin et al., 2019) and TAS-B (Hofstätter et al., 2021) – with the latter being fine-tuned for retrieval. It can be seen that the extra training of the model on retrieval tasks aids in achieving superior performance for retrieving the knowledge for text-to-SQL.

Table 5: Breakdown of text-to-SQL results into overlapping and non-overlapping domain settings between training (knowledge base construction) and test (text-to-SQL evaluation) databases.

Models            Overlap   Non-Overlap
No Knowledge      22.85     16.20
DELLM             27.20     19.43
KAT-SQL (Ours)    49.37     24.19
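The two retrieval metrics reported above can be sketched as follows. The data layout (one ranked list of entries per query, plus a gold entry per query) is our own simplification for exposition.

```python
# Sketch of the retrieval metrics: Mean Reciprocal Rank over the rank of the
# gold entry, and Top@K accuracy (whether the gold entry is in the first K).

def mrr(rankings, gold):
    """rankings: list of ranked entry lists; gold: gold entry per query."""
    total = 0.0
    for ranked, g in zip(rankings, gold):
        if g in ranked:
            total += 1.0 / (ranked.index(g) + 1)  # reciprocal of 1-based rank
    return total / len(rankings)

def top_at_k(rankings, gold, k):
    hits = sum(g in ranked[:k] for ranked, g in zip(rankings, gold))
    return hits / len(rankings)
```

For example, if the gold entry is ranked first for one query and second for another, MRR is (1 + 1/2) / 2 = 0.75, matching how the Table 4 numbers are computed.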
Generalization Analysis to Different Domains To see whether our knowledge base can be generalizable to databases of different domains (that do not overlap with those used for knowledge base construction), we break down the performance based on whether test databases share domains with training databases or belong to different domains (according to the 37 domains categorized by Li et al. (2023)). As shown in Table 5, our KAT-SQL achieves substantially higher performance when test databases overlap with training domains compared to those from unseen domains; however, even in the latter case, KAT-SQL still outperforms existing baselines. These results indicate that, while the lack of domain overlap degrades the performance, our knowledge base still provides meaningful benefits for unseen domains, demonstrating its generalizability.

Analysis with Different LLMs To evaluate how robust our KAT-SQL approach is across different LLMs, we conduct an additional analysis instantiating the text-to-SQL and knowledge generation models with other recent LLMs such as Granite 34B (Mishra et al., 2024) and Mixtral 8x7B (Jiang et al., 2024); results are shown in Table 6. From this, we observe that KAT-SQL consistently outperforms all baselines regardless of the choice of LLMs, which demonstrates the effectiveness and versatility of our proposed approach.

Table 6: Text-to-SQL results with different LLMs.

LLMs      Methods             Overlap   Non-Overlap
Llama     No Knowledge        23.76     20.66
          DELLM               34.70     24.64
          KAT-SQL             41.18     41.07
          Oracle Knowledge    54.67     49.41
Granite   No Knowledge        25.83     17.75
          DELLM               34.04     20.21
          KAT-SQL             39.28     35.83
          Oracle Knowledge    46.56     38.32
Mixtral   No Knowledge        11.75     10.58
          DELLM               27.17     11.29
          KAT-SQL             29.31     20.30
          Oracle Knowledge    37.26     30.88

Table 7: Results of our KAT-SQL approach with the state-of-the-art text-to-SQL model on the BIRD leaderboard.

Models                                         EX
ChatGPT                                        24.05
ChatGPT + CoT                                  25.88
ExSL + granite-20b-code                        51.69
ExSL + granite-20b-code w/ KAT-SQL (Ours)      57.56
ExSL + granite-20b-code w/ Oracle Knowledge    65.38

Finally, we augment the state-of-the-art text-to-SQL model (in the setting without oracle knowledge) on the BIRD leaderboard (Li et al., 2023), namely ExSL + granite-20b-code, using the knowledge generated from our proposed knowledge base construction-based approach. As shown in Table 7, we observe that the text-to-SQL model combined with our KAT-SQL approach establishes a new state-of-the-art performance, highlighting the value of our method as a powerful tool for text-to-SQL.

Analysis on Efficiency While our primary focus is on improving text-to-SQL accuracy through knowledge base construction and augmentation, we also consider the efficiency of our approach. It is worth noting that the construction of the knowledge base is performed offline and does not affect real-time query processing; therefore, the extra computational overhead comes from retrieving relevant knowledge and generating the SQL statement in response to the query. In this regard, our retrieval process accounts for only 2% of the overall generation time, thanks to an efficient search algorithm (Douze et al., 2024), making its impact negligible. Also, although incorporating knowledge into the text-to-SQL pipeline increases the prompt length by 30%, this overhead aligns with other knowledge augmentation methods (such as DELLM) and does not introduce additional latency specific to our approach. Overall, each query is processed in under 5 seconds.

Table 8: Case study of knowledge and SQL generation results for three queries.
Example 1
Query: Which position has a lower minimum salary, Account Representative or Trainee?
Generated Knowledge: 1) lower minimum salary refers to MIN(minsalary); 2) Account Representative and Trainee are position titles
Generated SQL: SELECT positiontitle FROM position WHERE positiontitle = 'Account Representative' OR positiontitle = 'Trainee' ORDER BY minsalary ASC LIMIT 1

Example 2
Query: Among the employees working at the office in New York, how many of them have a good job performance?
Generated Knowledge: 1) New York refers to state = 'NY'; 2) good job performance refers to performance = 'Good'
Generated SQL: SELECT COUNT(*) FROM employee AS T1 INNER JOIN location AS T2 ON T1.locationID = T2.locationID WHERE T2.state = 'NY' AND T1.performance = 'Good'

Example 3
Query: For all patients born in 1982, state if their albumin is within normal range.
Generated Knowledge: 1) albumin is within normal range refers to ALB between 3.5 and 5.5
Generated SQL: SELECT CASE WHEN T2.ALB >= 3.5 AND T2.ALB <= 5.5 THEN 'normal' ELSE 'abnormal' END FROM Patient AS T1 INNER JOIN Laboratory AS T2 ON T1.ID = T2.ID WHERE STRFTIME('%Y', T1.Birthday) = '1982'

Table 9: Examples of original and (similar) constructed knowledge within the knowledge base.

Example 1
Original Knowledge: albumin is within normal range refers to ALB between 3.5 and 5.5
Constructed Similar Knowledge: 1) albumin is outside the normal range refers to ALB less than 3.5 or greater than 5.5; 2) glucose is within normal range refers to GLU between 70 and 100 mg/dL; 3) Hemoglobin (Hb) is considered normal for males if levels range from 13.5 to 17.5 g/dL

Example 2
Original Knowledge: Eligible free rate for K-12 = Free Meal Count (K-12) / Enrollment (K-12)
Constructed Similar Knowledge: 1) Eligible reduced-price rate for K-12 = Reduced-Price Meal Count (K-12) / Enrollment (K-12); 2) Eligible free meal rate for students aged 5-17 = Free Meal Count (Ages 5-17) / Enrollment (Ages 5-17); 3) Difference between K-12 and ages 5-17 enrollment = Enrollment (K-12) - Enrollment (Ages 5-17)

Example 3
Original Knowledge: Slovakia can be represented as Country = 'SVK'
Constructed Similar Knowledge: 1) France can be represented as Country = 'FRA'; 2) Brazil can be represented as Country = 'BRA'; 3) Monaco can be represented as Country = 'MCO'

Examples We provide examples of the knowledge generation and text-to-SQL results in Table 8, as well as the entries in the knowledge base in Table 9.

6 Conclusion

In this work, we proposed a novel knowledge base construction-based text-to-SQL approach called KAT-SQL, based on the motivation that one piece of knowledge can be reused across multiple queries and databases. Our approach involves the creation of the knowledge base from which relevant knowledge is retrieved and utilized to generate SQL statements from queries. Through extensive evaluations on multiple datasets with two different scenarios, we showed that our KAT-SQL outperforms relevant knowledge-augmented text-to-SQL baselines. In addition, our detailed analyses highlight not only the effectiveness of each component in the knowledge generation and retrieval processes, but also the high coverage and relevance of the entries in the base.

Limitations

In this work, we propose constructing a knowledge base and then leveraging it for text-to-SQL tasks, showcasing the clear advantages of constructing the knowledge base for text-to-SQL.
However, as the performance gaps between the models with oracle knowledge and the generated knowledge from our knowledge base indicate, there is still room to improve the coverage of the knowledge base with advanced knowledge base construction methods, which is a promising area for future work.

Ethics Statement

We recognize that any text-to-SQL system, including our proposed approach, may carry the inherent risk of generating SQL queries that may inadvertently or intentionally access, modify, or delete sensitive information within a database. While this vulnerability is not exclusive to our method and is a well-known challenge in the broader field of text-to-SQL systems, it underscores the importance of implementing robust security measures and access controls before deploying such systems. Similarly, safety is particularly crucial in our application, so as to avoid the risk of sensitive information being stored in the knowledge base and subsequently being inappropriately reused.

References

AI@Meta. 2024. Llama 3 model card.

Dimitrios Alivanistos, Selene Baez Santamaría, Michael Cochez, Jan-Christoph Kalo, Emile van Krieken, and Thiviyan Thanapalasingam. 2022. Prompting as probing: Using language models for knowledge base construction. In Proceedings of the Semantic Web Challenge on Knowledge Base Construction from Pre-trained Language Models 2022 co-located with the 21st International Semantic Web Conference (ISWC 2022), Virtual Event, Hangzhou, China,
October 2022, volume 3274 of CEUR Workshop Proceedings, pages 11–34. CEUR-WS.org.

Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M. Dai, Anja Hauth, Katie Millican, David Silver, Slav Petrov, Melvin Johnson, Ioannis Antonoglou, Julian Schrittwieser, Amelia Glaese, Jilin Chen, Emily Pitler, Timothy P. Lillicrap, Angeliki Lazaridou, Orhan Firat, James Molloy, Michael Isard, Paul Ronald Barham, Tom Hennigan, Benjamin Lee, Fabio Viola, Malcolm Reynolds, Yuanzhong Xu, Ryan Doherty, Eli Collins, Clemens Meyer, Eliza Rutherford, Erica Moreira, Kareem Ayoub, Megha Goel, George Tucker, Enrique Piqueras, Maxim Krikun, Iain Barr, Nikolay Savinov, Ivo Danihelka, Becca Roelofs, Anaïs White, Anders Andreassen, Tamara von Glehn, Lakshman Yagati, Mehran Kazemi, Lucas Gonzalez, Misha Khalman, Jakub Sygnowski, and et al. 2023. Gemini: A family of highly capable multimodal models. arXiv preprint arXiv:2312.11805.

Ruichu Cai, Boyan Xu, Zhenjie Zhang, Xiaoyan Yang, Zijian Li, and Zhihao Liang. 2018. An encoder-decoder framework translating natural language to database queries. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 3977–3983. ijcai.org.

Shuaichen Chang and Eric Fosler-Lussier. 2023. How to prompt llms for text-to-sql: A study in zero-shot, single-domain, and cross-domain settings. arXiv preprint arXiv:2305.11853.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics.
Xuemei Dong, Chao Zhang, Yuhang Ge, Yuren Mao, Yunjun Gao, Lu Chen, Jinshu Lin, and Dongfang Lou. 2023. C3: zero-shot text-to-sql with chatgpt. arXiv preprint arXiv:2307.07306.

Longxu Dou, Yan Gao, Xuqi Liu, Mingyang Pan, Dingzirui Wang, Wanxiang Che, Dechen Zhan, Min-Yen Kan, and Jian-Guang Lou. 2022. Towards knowledge-intensive text-to-sql semantic parsing with formulaic knowledge. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 5240–5253. Association for Computational Linguistics.

Matthijs Douze, Alexandr Guzhva, Chengqi Deng, Jeff Johnson, Gergely Szilvasy, Pierre-Emmanuel Mazaré, Maria Lomeli, Lucas Hosseini, and Hervé Jégou. 2024. The faiss library. arXiv preprint arXiv:2401.08281.

Dawei Gao, Haibin Wang, Yaliang Li, Xiuyu Sun, Yichen Qian, Bolin Ding, and Jingren Zhou. 2024. Text-to-sql empowered by large language models: A benchmark evaluation. Proc. VLDB Endow., 17(5):1132–1145.

Zihui Gu, Ju Fan, Nan Tang, Lei Cao, Bowen Jia, Sam Madden, and Xiaoyong Du. 2023. Few-shot text-to-sql translation using structure and content prompt learning. Proc. ACM Manag. Data, 1(2):147:1–147:28.

Sebastian Hofstätter, Sheng-Chieh Lin, Jheng-Hong Yang, Jimmy Lin, and Allan Hanbury. 2021. Efficiently teaching an effective dense retriever with balanced topic aware sampling. In SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, July 11-15, 2021, pages 113–122. ACM.

Zijin Hong, Zheng Yuan, Hao Chen, Qinggang Zhang, Feiran Huang,
and Xiao Huang. 2024. Knowledge-to-sql: Enhancing SQL generation with data expert LLM. arXiv preprint arXiv:2402.11517.

Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2024. Mixtral of experts. arXiv preprint arXiv:2401.04088.

Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick S. H. Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 6769–6781. Association for Computational Linguistics.

Dongjun Lee, Choongwon Park, Jaehyuk Kim, and Heesoo Park. 2024. MCS-SQL: leveraging multiple prompts and multiple-choice selection for text-to-sql generation. arXiv preprint arXiv:2405.07467.

Jinyang Li, Binyuan Hui, Ge Qu, Jiaxi Yang, Binhua Li, Bowen Li, Bailin Wang, Bowen Qin, Ruiying Geng, Nan Huo, Xuanhe Zhou, Chenhao Ma, Guoliang Li, Kevin Chen-Chuan Chang, Fei Huang, Reynold Cheng, and Yongbin Li. 2023. Can LLM already serve as A database interface? A big bench for large-scale database grounded text-to-sqls. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023.

Xiping Liu and Zhao Tan. 2023. Divide and prompt: Chain of thought prompting for text-to-sql. arXiv preprint arXiv:2304.11556.
Mayank Mishra, Matt Stallone, Gaoyuan Zhang, Yikang Shen, Aditya Prasad, Adriana Meza Soria, Michele Merler, Parameswaran Selvam, Saptha Surendran, Shivdeep Singh, Manish Sethi, Xuan-Hong Dang, Pengyuan Li, Kun-Lung Wu, Syed Zawad, Andrew Coleman, Matthew White, Mark Lewis, Raju Pavuluri, Yan Koyfman, Boris Lublinsky, Maximilien de Bayser, Ibrahim Abdelaziz, Kinjal Basu, Mayank Agarwal, Yi Zhou, Chris Johnson, Aanchal Goyal, Hima Patel, S. Yousaf Shah, Petros Zerfos, Heiko Ludwig, Asim Munawar, Maxwell Crouse, Pavan Kapanipathi, Shweta Salaria, Bob Calio, Sophia Wen, Seetharami Seelam, Brian Belgodere, Carlos A. Fonseca, Amith Singhee, Nirmit Desai, David D. Cox, Ruchir Puri, and Rameswar Panda. 2024. Granite code models: A family of open foundation models for code intelligence. arXiv preprint arXiv:2405.04324.

Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, and Ahmed Awadallah. 2023. Orca: Progressive learning from complex explanation traces of GPT-4. arXiv preprint arXiv:2306.02707.

Anmol Nayak and Hariprasad Timmapathini. 2023. LLM2KB: constructing knowledge bases using instruction tuned context aware large language models. In Joint proceedings of the 1st workshop on Knowledge Base Construction from Pre-Trained Language Models (KBC-LM) and the 2nd challenge on Language Models for Knowledge Base Construction (LM-KBC) co-located with the 22nd International Semantic Web Conference (ISWC 2023), Athens, Greece, November 6, 2023, volume 3577 of CEUR Workshop Proceedings. CEUR-WS.org.

OpenAI. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.

Mohammadreza Pourreza and Davood Rafiei. 2023. DIN-SQL: decomposed in-context learning of text-to-sql with self-correction.
In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023.

Nitarshan Rajkumar, Raymond Li, and Dzmitry Bahdanau. 2022. Evaluating the text-to-sql capabilities of large language models. arXiv preprint arXiv:2204.00498.

Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2020. Mpnet: Masked and permuted pre-training for language understanding. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.

Shivchander Sudalairaj, Abhishek Bhandwaldar, Aldo Pareja, Kai Xu, David D. Cox, and Akash Srivastava. 2024. LAB: large-scale alignment for chatbots. arXiv preprint arXiv:2403.01081.

Chang-Yu Tai, Ziru Chen, Tianshu Zhang, Xiang Deng, and Huan Sun. 2023. Exploring chain of thought style prompting for text-to-sql. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 5376–5393. Association for Computational Linguistics.

Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca.

Blerta Veseli, Simon Razniewski, Jan-Christoph Kalo, and Gerhard Weikum. 2023. Evaluating the knowledge base completion potential of GPT. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 6432–6443. Association for Computational Linguistics.

Denny Vrandecic and Markus Krötzsch. 2014. Wikidata: a free collaborative knowledgebase. Commun. ACM, 57(10):78–85.

Bing Wang, Changyu Ren, Jian Yang, Xinnian Liang, Jiaqi Bai, Qian-Wen Zhang, Zhao Yan, and Zhoujun Li. 2023a.
MAC-SQL: A multi-agent collaborative framework for text-to-sql. arXiv preprint arXiv:2312.11242.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V. Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023b. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net.

Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2023c. Self-instruct: Aligning language models with self-generated instructions. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 13484–13508. Association for Computational Linguistics.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022.

Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. 2023. Wizardlm: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244.

Xiaojun Xu, Chang Liu, and Dawn Song. 2017. Sqlnet: Generating structured queries from natural language without reinforcement learning. arXiv preprint arXiv:1711.04436.

Navid Yaghmazadeh, Yuepeng Wang,
Isil Dillig, and Thomas Dillig. 2017. Sqlizer: query synthesis from natural language. Proc. ACM Program. Lang., 1(OOPSLA):63:1–63:26.

Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir R. Radev. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 3911–3921. Association for Computational Linguistics.

John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of the Thirteenth National Conference on Artificial Intelligence and Eighth Innovative Applications of Artificial Intelligence Conference, AAAI 96, IAAI 96, Portland, Oregon, USA, August 4-8, 1996, Volume 2, pages 1050–1055. AAAI Press / The MIT Press.

A Prompts

We provide the prompts used to elicit knowledge generation and SQL generation in Table 10.

B Algorithms

We provide the pseudo-code for knowledge base construction in Algorithm 2 and the pseudo-code for our full KAT-SQL approach in Algorithm 3.

C Additional Experimental Results

Knowledge Base Statistics: The resulting knowledge base for the database overlapping and non-overlapping scenarios contains 86,254 and 117,328 knowledge entries, respectively, which are greater than the original number of knowledge entries annotated in the BIRD dataset, which is 12,751.

Knowledge Base Construction Cost: While the construction of the knowledge base is performed offline and does not impact real-time operations of text-to-SQL, we provide the cost to construct the knowledge base for our KAT-SQL approach to enable researchers to estimate resource requirements for scaling and implementation.
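As a rough sanity check of the construction cost reported next (about 2 seconds per generated entry and on the order of 100K entries, per the figures below), the headline numbers follow from simple arithmetic:

```python
# Back-of-the-envelope check of the knowledge base construction cost.
# The per-entry generation time and entry count are taken from the text.
entries = 100_000
seconds_per_entry = 2

total_hours = entries * seconds_per_entry / 3600
print(round(total_hours))  # -> 56 (sequential generation)

parallel_models = 8
print(round(total_hours / parallel_models))  # -> 7 (with 8 parallel models)
```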
Note that the exact computational costs and time required for knowledge base construction vary depending on hardware types and configurations. With four H100 GPUs that can process 2K tokens per second and generate 10 tokens per second for Llama 70B, the time required to generate each knowledge entry is around 2 seconds. Therefore, for a knowledge base with 100K entries, the total generation time would be 56 hours divided by the number of parallel models (it completes in 7 hours with 8 models).

Retrieval over Different Knowledge Sources: It is worth noting that, for text-to-SQL tasks, it is crucial to consider the relationship between the query and the database (in addition to the consideration of domain-specific knowledge for domain-specific queries); therefore, using unstructured knowledge sources (such as web search) may not be optimal for this purpose, since they often lack the structured, schema-specific information necessary for accurately formulating SQL queries. Nevertheless, to further validate this claim, we perform retrieval over Wikipedia instead of the constructed knowledge base, and observe only a marginal performance gain (3%) compared to the baseline without augmentation.

Table 10: A list of prompts that we use for knowledge generation and SQL generation. Note that the variables inside the braces {} are replaced with their actual values.

Knowledge Generation:
DB Schema:
{Database Schema}
Question: {Few-Shot Question 1}
Evidence: {Few-Shot Evidence 1}
Question: {Few-Shot Question 2}
Evidence: {Few-Shot Evidence 2}
...
Question: {Few-Shot Question 10}
Evidence: {Few-Shot Evidence 10}
Question: {Target Question}
Evidence:

SQL Generation:
DB Schema: {Database Schema}
Question: {Few-Shot Question 1}
Evidence: {Few-Shot Evidence 1}
SQL: {Few-Shot SQL 1}
Question: {Few-Shot Question 2}
Evidence: {Few-Shot Evidence 2}
SQL: {Few-Shot SQL 2}
...
Question: {Few-Shot Question 10}
Evidence: {Few-Shot Evidence 10}
SQL: {Few-Shot SQL 10}
Question: {Target Question}
Evidence: {Generated Knowledge}
SQL:

Algorithm 2: Knowledge Base Construction for KAT-SQL
Require: Dataset D containing query-schema-knowledge triplets (q, D, k); prompt template T
Ensure: Knowledge base K
1: K ← {}                                ▷ Initialize an empty knowledge base
2: for all (q, D, k) ∈ D do
3:     K ← K ∪ k                         ▷ Add existing knowledge to the knowledge base
4: end for
5: for all query-schema pairs (q, D) ∈ D do
6:     E ← Top-k relevant examples to the query q from D
7:     for i = 1 to N do                 ▷ Iteratively expand knowledge
8:         E_perm ← Permute examples E
9:         k_new ← LLM(T(q, D, E_perm))  ▷ Generate knowledge using the LLM with examples
10:        K ← K ∪ k_new                 ▷ Store generated knowledge in the knowledge base
11:    end for
12: end for

Algorithm 3: Knowledge-Augmented Text-to-SQL (KAT-SQL)
Require: Query q; database schema D; knowledge base K
Ensure: SQL statement s
1: function KAT-SQL(q, D, K)
2:     {k_i}_{i=1}^{j} ← RETRIEVER(q, K)            ▷ Retrieve relevant knowledge entries from K
3:     T_ref ← CREATE_PROMPT(q, {k_i}_{i=1}^{j}, D) ▷ Construct the prompt with retrieved knowledge
4:     k' ← LLM(T_ref)                              ▷ Refine knowledge using the LLM
5:     T_aug ← CREATE_PROMPT(q, k', D)              ▷ Augment the prompt with refined knowledge
6:     s ← LLM(T_aug)                               ▷ Generate SQL with knowledge augmentation
7:     return s
8: end function
9: function RETRIEVER(q, K)
10:    Compute embeddings for q and all knowledge entries k ∈ K
11:    Retrieve the top-j relevant knowledge entries {k_i}_{i=1}^{j} based on embedding similarities
12:    return {k_i}_{i=1}^{j}
13: end function
14: function CREATE_PROMPT(q, k, D)
15:    Construct the prompt template T using the query q, knowledge k, and database schema D
16:    return T
17: end function
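Under simplifying assumptions, the knowledge base construction loop of Algorithm 2 and the RETRIEVER of Algorithm 3 can be sketched in a few lines. Here `fake_llm`, the bag-of-words `embed`, and the toy dataset are illustrative stand-ins, not the paper's actual models or data (the paper uses a real LLM and a dense encoder):

```python
# Illustrative sketch of Algorithm 2 and the RETRIEVER of Algorithm 3,
# with the LLM and dense retriever replaced by toy stand-ins.

def embed(text):
    # Toy bag-of-words "embedding" (stand-in for a dense encoder).
    vocab = ["salary", "performance", "albumin"]
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def retriever(query, kb, j=1):
    # Algorithm 3, RETRIEVER: rank knowledge entries by embedding similarity.
    q = embed(query)
    score = lambda k: sum(a * b for a, b in zip(q, embed(k)))
    return sorted(kb, key=score, reverse=True)[:j]

def build_knowledge_base(dataset, llm, n_iters=2):
    # Algorithm 2: seed the base with annotated knowledge (lines 2-4),
    # then iteratively expand it with LLM generations (lines 5-12).
    kb = set()
    for _, _, k in dataset:
        kb.add(k)
    for q, schema, _ in dataset:
        for i in range(n_iters):
            # The index i stands in for line 8's example permutation,
            # which makes each generation call differ.
            kb.add(llm(f"examples #{i} | question: {q} | schema: {schema}"))
    return kb

def fake_llm(prompt):
    # Stand-in for the LLM; real generations vary with the example permutation.
    return "knowledge derived from: " + prompt

dataset = [
    ("which position has a lower minimum salary?", "position(title, minsalary)",
     "lower minimum salary refers to MIN(minsalary)"),
    ("how many employees have a good performance?", "employee(id, performance)",
     "good job performance refers to performance = 'Good'"),
]
kb = build_knowledge_base(dataset, fake_llm)
top = retriever("positions sorted by minimum salary", kb)
print(top[0])  # the seeded MIN(minsalary) entry ranks first for this query
```

The retrieved entry would then be refined and spliced into the SQL-generation prompt of Table 10, as in lines 3-6 of Algorithm 3.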
arXiv:2505.22104v1 [cs.AI] 28 May 2025

Efficient Dynamic Shielding for Parametric Safety Specifications

Davide Corsi1, Kaushik Mallik2, Andoni Rodríguez2, and César Sánchez2
1 University of California, Irvine, USA. dcorsi@uci.edu
2 IMDEA Software Institute, Spain. {kaushik.mallik,andoni.rodriguez,cesar.sanchez}@imdea.org

Abstract. Shielding has emerged as a promising approach for ensuring safety of AI-controlled autonomous systems. The algorithmic goal is to compute a shield, which is a runtime safety enforcement tool that monitors the AI controller's actions and intervenes if safety could be compromised otherwise. Traditional shields are designed statically for a specific safety requirement. Therefore, if the safety requirement changes at runtime due to changing operating conditions, the shield needs to be recomputed from scratch, causing delays that could be fatal. We introduce dynamic shields for parametric safety specifications, which are succinctly represented sets of all possible safety specifications that may be encountered at runtime. Our dynamic shields are statically designed for a given safety parameter set, and are able to dynamically adapt as the true safety specification (permissible by the parameters) is revealed at runtime. The main algorithmic novelty lies in the dynamic adaptation procedure, which is a simple and fast algorithm that utilizes known features of standard safety shields, like maximal permissiveness. We report experimental results for a robot navigation problem in unknown territories, where the safety specification evolves as new obstacles are discovered at runtime. In our experiments, the dynamic shields took a few minutes for their offline design, and between a fraction of a second and a few seconds for online adaptation at each step, whereas the brute-force online recomputation approach was up to 5 times slower.

Keywords: Dynamic shields · parametric safety · symbolic control.
1 Introduction

Most critical autonomous systems like self-driving cars are nowadays controlled by machine-learned (ML) controllers, and ensuring their safety is an important agenda in artificial intelligence and formal methods research. Unfortunately, the traditional static safety verification tools from formal methods usually do not scale to the size and complexity of ML-based systems. One promising alternative is shielding [4,1,3,13], where we deploy a formally verified runtime enforcement tool, the shield, that monitors the actions of the ML controller and overrides them whenever safety could be at risk. Usually shield synthesis is cheaper than verifying the entire system, because the synthesis happens on a small system abstraction that concerns only the safety aspects. More importantly, the synthesis process treats the ML controller as a black box, thereby bypassing the scalability issues faced by the traditional model-based formal approaches. In the recent past, shielding has been successfully applied in tandem with complex machine-learned controllers in a large variety of applications, including safe human-robot interactions [9] and safe autonomous driving [23].

State-of-the-art shielding approaches offer statically designed shields, crafted for a specific safety objective provided at design time. In reality, safety specifications often vary over time, and there are no principled approaches to dynamically adapt a (statically designed) shield as new safety objectives are uncovered at runtime. For instance, consider a mobile robot placed in a workspace
https://arxiv.org/abs/2505.22104v1
whose map is unknown a priori. The workspace is filled with static obstacles, and the robot must avoid colliding with them at all times. However, the visibility of the robot is limited by the range of its sensors, and therefore it can see the obstacles only when it gets close to them. If the entire map were visible to the shield, it could use the locations of the obstacles to define its safety specification. However, due to limited visibility, the safety specification would only concern the visible immediate neighborhood of the robot, and it would keep changing in real time as new obstacles are uncovered.

We present a novel framework of dynamic shielding with respect to evolving safety specifications. We assume that we are given a perturbed, discrete-time dynamical model of the system, and a parameterized (finite) set of all possible safety specifications that could be encountered at runtime. To be more specific, for the specification part, we are given a finite collection of safety objectives of the form {□G_i}_i, called the parameter set, where G_i is a set of safe states of the system, and □G_i is a linear temporal logic (LTL) formula specifying that G_i must not be left at any time. Each formula □G_i represents an atomic safety objective, and the actual safety specification encountered at runtime will be the conjunction of an arbitrary subset of atomic safety objectives. The aim is to statically design a shield for the statically provided parameter set {□G_i}_i, such that the shield can dynamically adapt itself for every dynamically generated safety specification.

Naturally, shielding against parametric safety would require coordination between the offline and online design phases, and the two extreme ends of the coordination spectrum are as follows. The pure offline approach would design one (static) shield for each subset of {□G_i}_i, so that the right shield could be deployed with no additional delay at runtime.
However, this would require solving an exponential number of offline shield synthesis problems, which will not scale if the parameter set is large. In contrast, the pure online approach would perform no computation in the offline phase, and at runtime, whenever a new safety specification is revealed, it would compute a (static) shield to be deployed immediately. This would increase the computational delays in shield deployment, which may not be feasible in systems with fast dynamics.

We present an efficient dynamic shielding algorithm that creates a harmony between the offline and online design phases. In the offline design phase, one (static) atomic shield S_i is computed for each atomic safety specification □G_i, solving a linear number of shield synthesis problems as opposed to the exponential number in the pure offline algorithm. In the online deployment phase, as a new safety specification Φ = □G_j ∩ □G_k ∩ ... is encountered, the respective atomic shields S_j, S_k, ... are composed to obtain the shield for the specification Φ. This composition operation is the main technical novelty of this paper. It utilizes simple known features like maximal permissiveness of safety shields, giving rise to a fast composition algorithm involving shield "intersections" followed by iterative deadlock removals. As a result, we obtain
a lightweight online adaptation procedure that is significantly cheaper than the pure online algorithm, which would instead compute a new shield for Φ from scratch.

We propose abstraction-based synthesis algorithms for our dynamic shields, though other alternatives could also be pursued [28]. Concretely, we first create an abstract model of the system by following a standard procedure [24], namely discretizing the system's state and input spaces using uniform grids, and then conservatively approximating the system dynamics over the discrete spaces. Afterwards, we adapt existing abstraction-based synthesis algorithms [24] for the offline design and online adaptation of our dynamic shields. These procedures are compatible with symbolic data structures, particularly binary decision diagrams (BDDs), giving rise to efficient implementations of our dynamic shields.

We also provide practical strategies to address the following safe handover question that naturally arises: if the safety specification evolves at each step, how can we be sure that the current actions of the shield will keep the future system states within the domain of subsequent shield adaptations? We address this question for the specific problem of safe robot navigation in unknown territories, where the shield may encounter previously unknown obstacles from time to time. We propose the most conservative solution to the safe handover problem, namely that at each step, the shield assumes the entire unobservable part of the state space to be unsafe. This is not as restrictive as it sounds, because faraway obstacles (in the unobservable part) have hardly any influence on the shield's actions. As the robot starts moving, states which were earlier assumed unsafe turn out to be actually safe, and it is guaranteed that the future states of the robot will be within the domain of future shield adaptations.
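The composition step described above, intersection of maximally permissive shields followed by iterative deadlock removal, can be illustrated with a minimal sketch. This is not the paper's symbolic (BDD-based) implementation: each atomic shield is modeled here as a plain map from states to allowed actions over a toy deterministic 1-D world, and the fixpoint prunes actions leading into deadlocked states:

```python
# Minimal sketch of composing maximally permissive safety shields:
# intersect the allowed-action sets, then iteratively remove actions
# that lead to deadlock states (states with no allowed action left).

def compose(shields, states, step):
    allowed = {s: set.intersection(*(sh[s] for sh in shields)) for s in states}
    changed = True
    while changed:  # fixpoint iteration over deadlock removal
        changed = False
        dead = {s for s in states if not allowed[s]}
        for s in states:
            for a in list(allowed[s]):
                if step(s, a) in dead:
                    allowed[s].remove(a)
                    changed = True
    return allowed

# Toy 1-D world: cells 0..4, actions -1 (left), 0 (stay), +1 (right).
states = range(5)
actions = (-1, 0, 1)
step = lambda s, a: min(4, max(0, s + a))

# Atomic shield S1 forbids entering cell 4; S2 forbids entering cell 0.
S1 = {s: {a for a in actions if step(s, a) != 4} for s in states}
S2 = {s: {a for a in actions if step(s, a) != 0} for s in states}

combined = compose([S1, S2], states, step)
print(sorted(combined[2]))  # all three actions remain safe at the center cell
```

Because each atomic shield is maximally permissive, the intersection over-approximates the composed shield only by actions that steer into deadlocks, which the fixpoint then removes; here no deadlocks arise and every state keeps at least one safe action.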
Finally, we demonstrate the practical effectiveness and feasibility of the dynamic shields using a prototype implementation based on the tool Mascot-SDS [15]. The safety rate of the shields was 100%, which is unsurprising since they are correct by construction. Furthermore, the offline design of our dynamic shields finished within minutes, and the online adaptation per step on average took between a fraction of a second and a few seconds, which was up to 5 times faster than the pure online baseline. This demonstrates the practical feasibility of our dynamic shields. In summary, our contributions are as follows:

(a) We propose the problem of dynamic shielding for evolving safety specifications.
(b) We present a novel algorithm for dynamic shielding, which orchestrates offline shield synthesis with a lightweight online adaptation procedure.
(c) We show how our algorithms can be symbolically implemented using the abstraction-based control paradigm.
(d) We present a practical approach to address the safe handover question for dynamic shields in navigation tasks.
(e) We demonstrate the superior computational performance of our dynamic shields using a prototype implementation.

Related Works

Shielding has become one of the enabling technologies in guaranteeing the safety of arbitrarily complex machine-learned controllers in autonomous systems [11,1,3,13,5]. It has been studied in two operational settings, namely pre-shielding and post-shielding. In pre-shielding, the shield is deployed during the