functions [42], which are defined as ratios of two tropical polynomials, themselves formed by maxima of affine functions. Closely related to this is the intuitive geometric understanding of consecutive layers of ReLU networks as space-folding transformations that allow a compact representation of similarities in the i...
https://arxiv.org/abs/2505.22074v1
has since become a cornerstone for training biologically inspired spiking models in a computationally efficient manner [1, 12, 38, 41]. Recently, a related approach called ProxyGrad [22] improved activation maximization (AM) in convolutional networks by manipulating gradients during optimization. The study showed that...
cut-off threshold, reducing the network's propensity to overfit through topological simplification and sparsity [27, 29], but without harming the gradient flow. In preliminary studies, we realized the need to adapt current activation functions to the specific purpose of SUGAR. As will be discussed in ...
proposed B-SiLU and NeLU. See Appendix C for the complete experimental results.
[Figure: validation loss over 100 epochs, non-SUGAR (left) vs. SUGAR (right); legend accuracies: ELU 52.4±0.4%, GELU 44.3±0.5%, Leaky ReLU 43.4±0.3%, Mish 48.4±0.3%, NeLU (α=0.01) 43.2±0.4%, NeLU (α=0.05) 42.4±0.4%, NeLU (α=0.1) 41.9±0.2%, ...]
we have trained the model on Tiny-ImageNet-200 [3], it is considerably over-parameterized for the dataset. The models were adopted in their original form from [23] and [5]. The only modification applied was replacing the activation function with the corresponding SUGAR implementation. When applied to Conv2NeXt, SUGAR c...
of the network. The first four convolutional layers of the SUGAR model exhibit slightly flatter, right-skewed distributions whose mode is lower than in the baseline. Hence on average, fewer filters are active for a given image, suggesting that the surrogate-optimized activation encourages selectivity and reduces redu...
the activation patterns. We now consider the models investigated in this work. The results in Section 4.1 and Section 5.1 use models that are not heavily regularized. In this setting, choosing a surrogate function that deviates considerably from the ReLU derivative induces harsher regularization and improves the predictive per...
CIFAR-10, CIFAR-100, and Tiny-ImageNet, in addition to a few toy problems. It is uncertain how SUGAR would perform in other domains such as natural language processing, reinforcement learning, or time series modeling, where activation dynamics and gradient propagation can differ substantially. Beyond addressing the curr...
and Yarin Gal. Relu to the rescue: improve your on-policy actor-critic with positive advantages. In Proceedings of the 41st International Conference on Machine Learning , ICML’24. JMLR.org, 2024. [14] Nandan Kumar Jha and Brandon Reagen. Relu’s revival: On the entropic overload in normalization-free large language mode...
Friedemann Zenke. Surrogate gradient learning in spiking neural networks: Bringing the power of gradient-based optimization to spiking neural networks. IEEE Signal Processing Magazine , 36:51–63, 2019. [31] Prajit Ramachandran, Barret Zoph, and Quoc V Le. Searching for activation functions. In 6th International Confere...
weights, the initial activation probabilities tend to be near zero. In that case, in more than 90% of trials, the network fails to learn meaningful representations due to widespread irrevocable neuron inactivity. To conduct the evaluation, four distinct toy datasets in the range of [−1.5, 1.5] were adapted from [25] as r...
the possibility of neurons becoming inactive. However, once a neuron is inactive, the model has no way to reactivate it, because the proposed method only concerns the initialization. With SUGAR, reactivation remains possible as a result of the enabled gradient flow through that particula...
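The reactivation mechanism described above can be sketched in NumPy: the forward pass stays a plain ReLU, but the backward pass substitutes a sigmoid-shaped surrogate derivative. The sigmoid here is an illustrative stand-in for the paper's B-SiLU/NeLU surrogates, not their exact form.

```python
import numpy as np

def relu_forward(x):
    # Standard ReLU forward pass.
    return np.maximum(x, 0.0)

def sugar_backward(x, grad_out, alpha=1.0):
    # Surrogate backward pass: a sigmoid-shaped derivative (illustrative
    # stand-in for B-SiLU/NeLU). It is strictly positive for x < 0, so
    # gradient still flows through inactive ("dead") neurons.
    surrogate_deriv = 1.0 / (1.0 + np.exp(-alpha * x))
    return grad_out * surrogate_deriv

x = np.array([-2.0, -0.5, 0.5, 2.0])
g = np.ones_like(x)
relu_grad = (x > 0).astype(float) * g  # true ReLU gradient: zero for x <= 0
sugar_grad = sugar_backward(x, g)      # nonzero everywhere
print(relu_grad)   # [0. 0. 1. 1.]
print(sugar_grad)  # all entries strictly positive
```

Because `sugar_grad` is nonzero for negative pre-activations, a unit that ReLU would leave permanently silent keeps receiving an update signal.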
...0.1) 69.42±0.34 71.09±0.59
Swish 73.91±0.49 73.85±0.22
VGG-16:
ELU 78.87±0.43 85.58±0.16
GELU 75.03±0.45 75.86±0.25
LeakyReLU 75.74±0.71 76.04±0.26
Mish 76.43±0.30 78.98±0.33
B-SiLU 78.50±0.25 88.35±0.26
ReLU 75.85±0.51 —
SELU 78.28±0.34 86.87±0.28
NeLU(α = 0.01) 75.54±0.30 75.88±0.23
NeLU(α = 0.05) 74.61±0.19 7...
[Figure: per-layer activation-count histograms (activation count vs. normalized frequency, by layer index). (c) ReLU (CIFAR-100, ResNet-18); (d) SUGAR B-SiLU (CIFAR-100, ResNet-18). ...]
is completely in line with [5], we provide detailed specifications regarding the training procedure here. torch.compile is applied with SUGAR. Conv2NeXt was trained on the CIFAR-100 dataset. •Batch size: A per-GPU batch size of 200 was used, with gradient accumulation over 4 steps, yielding an effective batch size of 8...
arXiv:2505.22086v1 [cs.AR] 28 May 2025
iDSE: Navigating Design Space Exploration in High-Level Synthesis Using LLMs
Runkai Li∗ (Southeast University), Jia Xiong∗ (Southeast University), Xi Wang† (Southeast University)
Abstract: High-Level Synthesis (HLS) serves as an agile hardware development tool that streamlines the circuit des...
https://arxiv.org/abs/2505.22086v1
[Figure 1: Time-consuming manual directive configuration tuning based on QoR reported in HLS. C/C++ sources pass through allocation, scheduling, binding, and design space exploration (pipeline, unroll, partition directives), then manual refinement with directive tuning and a QoR check (utilization, latency), yielding a QoR-compliant HLS design in Verilog/VHDL.]
aimed at reducin...
Feature-Driven Pruning approach to significantly expedite the DSE iterations by constructing a compact yet expressive HLS design space. •We reimagine DSE initialization through the LLM-guided Seed Directive Generation methodology. This approach enables warm-starts that rapidly converge toward comprehensive and concave ...
exploration within a confined design space under restricted search budgets. The introduction of ML and DL methods has driven the construction of analysis models for QoR prediction [27, 28, 46, 47]. By providing surrogate models for HLS tool evaluations, these methods enable performance and resource utilization prediction...
design space Φ for Pareto-optimal designs.
Definition 2 (Pareto-Optimal Designs). An explored design λ(φ∗) dominates λ(φa) if:
Lat(H, λ(φ∗)) ≤ Lat(H, λ(φa)), Util(H, λ(φ∗)) ≤ Util(H, λ(φa)) (2)
If no other φ ∈ Φ dominates λ(φ∗), then λ(φ∗) is called a Pareto-optimal design. All such designs form the P...
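Definition 2 can be checked mechanically. A minimal Python sketch of dominance and Pareto-front filtering over (latency, utilization) pairs, assuming both objectives are minimized:

```python
def dominates(a, b):
    # a = (latency, utilization); a dominates b if it is no worse in both
    # objectives and strictly better in at least one (both minimized).
    return (a[0] <= b[0] and a[1] <= b[1]) and (a[0] < b[0] or a[1] < b[1])

def pareto_front(designs):
    # Keep every design not dominated by any other explored design.
    return [d for d in designs
            if not any(dominates(o, d) for o in designs if o != d)]

designs = [(10, 5), (8, 7), (12, 4), (9, 9)]
print(pareto_front(designs))  # [(10, 5), (8, 7), (12, 4)] -- (9, 9) is dominated
```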
[Figure 3: iDSE Workflow: ① Feature-Driven Pruning, ② Seed Directive Generation, ③ QoR Analysis with optimization-trajectory reflection, exploring the design space (utilization vs. latency) toward Pareto-optimal designs.]
4.1 Preprocessing: Directive Configuration Extraction and Design Space Pruning
The allocation of loop and memory access paralle...
population size Pi, vendor HLS tool H; Output: Pareto-optimal designs λ(φ∗)
1: Initialize P0 with φinit ← WARMSTART(πθ, Φ, N0, λ(φ)) ▷ Section 4.2
2: Evaluate quality of results Q of initial sampling HLS designs λ(φinit) using H
3: for i ← 0 to Imax do
4:   Label population Pi with rank and crowding distance
5:   λ(φelite) ← SEL(πθ, Q, Pi)...
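The shape of this loop can be sketched in a heavily simplified Python stand-in, where random choices replace the LLM-guided warm start and crossover, and a toy `evaluate` function stands in for the vendor HLS tool H (all names and the cost model here are illustrative assumptions, not iDSE's implementation):

```python
import random

def evaluate(cfg):
    # Hypothetical stand-in for the vendor HLS tool H: maps a directive
    # configuration (unroll, partition factor) to (latency, utilization).
    unroll, part = cfg
    return (100.0 / unroll + part, 2.0 * unroll + 50.0 / part)

def dse_loop(space, n_init=4, iters=5, k_elite=3, seed=0):
    rng = random.Random(seed)
    pop = rng.sample(space, n_init)        # line 1: warm start (LLM-guided in iDSE)
    qor = {c: evaluate(c) for c in pop}    # line 2: evaluate initial designs with H
    for _ in range(iters):                 # line 3: main exploration loop
        elite = sorted(pop, key=lambda c: sum(qor[c]))[:k_elite]  # line 5: selection
        child = rng.choice(space)          # stand-in for LLM-guided crossover/mutation
        qor.setdefault(child, evaluate(child))
        pop = elite + [child]
    return min(qor.items(), key=lambda kv: sum(kv[1]))

space = [(u, p) for u in (1, 2, 4, 8) for p in (1, 2, 4)]
best_cfg, (lat, util) = dse_loop(space)
print(best_cfg, lat, util)
```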
in exploring Pareto-optimal designs that satisfy diverse optimization preferences. We selected 12 HLS benchmarks with varying functionality and design space dimensions |Φ| from PolyBench [90], CHStone [91], and MachSuite [92]. Diverse
Table 1: Comparison of iDSE effectiveness over baseline DSE methods. HLS Design A...
16.6× over NSGA-II, 12.7× over ACO, 9.9× over MOEA/D, 15.3× over Lattice, and 5.1× over HGBO-DSE. While Lattice achieved optimal performance in autocorr through local search traversal, its sensitivity to initial sampling designs and lack of global perspective limit its effectiveness in exploring trade-off curves in large...
a geometric mean speedup of 11.0× over meta-heuristic-based DSE methods and 4.4× over SOTA DSE methods. The notable improvement in performance and efficiency of iDSE stems from the dual effectiveness of initial high-quality sampling and intelligently guided searches. The LLM accelerates convergence to Pareto-optimal desi...
further integrated bottleneck analysis and design refactoring operators, examining the enhancement effects of domain-specific hardware optimization knowledge on DSE.
[Figure 6: Ablation of Adaptive Optimization: relative ADRS improvement (Baseline, S1, S2, S3) across CHStone, PolyBench, MachSuite, and Overall; reported gains include 4.0×, 5.3×, 3.6×, and 4.4×.]
The geometric...
Thomas Bourgeat, Brian Plancher, and Vijay Janapa Reddi. RoboShape: Using topology patterns to scalably and flexibly deploy accelerators across robots. In Proceedings of the 50th Annual International Symposium on Computer Architecture (ISCA) , 2023. [8]Sabrina M. Neuman, Brian Plancher, Thomas Bourgeat, Thierry Tambe, ...
Ansaloni, and Laura Pozzi. Lattice-traversing design space exploration for high level synthesis. In 2018 IEEE 36th International Conference on Computer Design (ICCD), pages 210–217, 2018. [22] Cody Hao Yu, Peng Wei, Max Grossman, Peng Zhang, Vivek Sarker, and Jason Cong. S2FA: An accelerator automation framework for...
errors with large language model. In Proceedings of the 61st ACM/IEEE Design Automation Conference (DAC) , 2024. [36] Yonggan Fu, Yongan Zhang, Zhongzhi Yu, Sixu Li, Zhifan Ye, Chaojian Li, Cheng Wan, and Yingyan Celine Lin. GPT4AIGChip: Towards next-generation AI accelerator design automation via large language models...
Betterv: controlled verilog generation with discriminative guidance. In Proceedings of the 41st International Conference on Machine Learning (ICML) , 2024. [51] Bingkun Yao, Ning Wang, Jie Zhou, Xi Wang, Hong Gao, Zhe Jiang, and Nan Guan. Location is key: Leveraging large language model for functional bug localization ...
Kwong. Pareto multi-task learning. In Advances in Neural Information Processing Systems , volume 32, 2019. [67] Xingchao Liu, Xin Tong, and Qiang Liu. Profiling pareto front with multi-objective stein variational gradient descent. In Advances in Neural Information Processing Systems , volume 34, pages 14721–14733, 2021...
and quality diversity optimization. In Proceedings of the Genetic and Evolutionary Computation Conference, pages 1110–1118, 2024. [82] Haishuai Wang, Yang Gao, Xin Zheng, Peng Zhang, Hongyang Chen, Jiajun Bu, and Philip S Yu. Graph neural architecture search with gpt-4. arXiv preprint arXiv:2310.01436, 2023. [83] Clin...
B Prompt Engineering
B.1 Prompt for Feature Extractor
B.2 Prompt for Feature-Driven Pruning
B.3 Prompt for Seed Directive Generation
B.4 Prompt for Qo...
Structure | Optimization Directive | Config. Examples | Examples of Feature Vector
Loop | PIPELINE | "off", "on" | {'name': 'mul', 'pipeline': 1, 'unroll': 2}
Loop | UNROLL | integer |
Array | ARRAY_PARTITION | "complete", "block", "cyclic"; integer | {'name': 'C', 'type': 2, 'dim': 1, 'factor': 2}
# Project Setup
open_project vector_mul
add_files vect...
set to default values (e.g., unroll factor 0) to prevent undefined tool behaviors. For array partitioning, Complete type partitioning automatically disables partition factors. A.3 Screening Mechanism for Designs with Optimization Potential During non-dominated sorting, we calculate domination counts for each design and...
the decomposition of array and loop structures in the C/C++ code.
∘ Arrays:
· Extract the array name.
· Identify the array dimensions.
· Determine the size of each dimension.
∘ Loops:
· Extract the loop name.
· Extract the outer loop name for nested loops.
· Calculate the maximum number of iterations for the loop.
•...
You will focus on optimizing three types of pragmas: array_partition, pipeline, and unroll.
• Note:
∘ For nested loops, set the innermost loop pipeline to 1; if the inner loop trip count > 32, set the outer loop pipeline to 0.
∘ Apply loop unroll pruning rules:
· If a loop has a variable bound or a non-exclusive inner loop body (imper...
}}}Figure 8: Prompt for Feature-Driven Pruning. B.3 Prompt for Seed Directive Generation Figure 9 details our hierarchical prompt architecture integrating task descriptions, sampling objectives, and functional descriptions of optimization directives. The sampling objectives are organized through three meticulously defi...
configuration file can be used.
∘ Performance Optimization: Achieve the best performance, which will result in higher hardware consumption.
∘ Resource Utilization Optimization: Minimize hardware resource usage, which may result in reduced performance.
∘ PPA Tradeoff Optimization: Find a balanced unroll factor combinati...
ranks: Favor lower-rank solutions even with lower crowding distances
• Bottleneck Analysis Reasoning:
∘ Step 1: Loop Optimization Check. Check loop-carried dependencies blocking pipeline/unroll optimizations.
∘ Step 2: Memory Access Check. Verify array partitioning factors match unroll/pipeline parallelism.
∘ Step...
granularity of unroll with computationally bottlenecked loops.
- Memory: Partition arrays related to loops (block for sequential, cyclic for parallel) with partition factor ≥ unroll factor.
- When the bottleneck is computation, prioritize the application of fine-grained pipelines (pipeline on);
- When the bottleneck i...
between parallel directives
· Ensure partition factor does not exceed array dimension
• Configuration Parameters Description: <Directive configuration ... >
• Input: Elite Parent Configurations: {parent_config}; HLS Design Code: {kernel_code};
• Expected Output: Generate exactly {num_crossover} configurations in this...
zero. This evaluation aligns with practical DSE goals, where the primary focus is to approximate or exceed the reference front rather than explore regions beyond it.
d = max( 0, (Lat(H, λ(φω)) − Lat(H, λ(φγ))) / Lat(H, λ(φγ)), (Util(H, λ(φω)) − Util(H, λ(φγ))) / Util(H, λ(φγ)) ) (5)
ADRS is suitable for HLS DSE due to the inherent irr...
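Equation (5) and the resulting ADRS can be computed in a few lines, assuming the standard ADRS formulation (the average, over the reference front, of the distance to the closest explored design):

```python
def distance(explored, reference):
    # Eq. (5): one-sided normalized distance between an explored design
    # and a reference design, each a (latency, utilization) pair.
    lat_w, util_w = explored
    lat_g, util_g = reference
    return max(0.0,
               (lat_w - lat_g) / lat_g,
               (util_w - util_g) / util_g)

def adrs(explored_front, reference_front):
    # Average, over the reference front, of the distance to the closest
    # explored design; zero means the reference front is matched or exceeded.
    return sum(min(distance(e, r) for e in explored_front)
               for r in reference_front) / len(reference_front)

ref = [(10.0, 4.0), (8.0, 6.0)]
print(adrs(ref, ref))                      # 0.0
print(distance((12.0, 4.0), (10.0, 4.0)))  # 0.2 -- 20% worse latency
```

The clipping at zero is what makes the metric one-sided: designs that beat the reference front contribute no penalty.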
trials=100, EHVI candidates=24, Prior weight=1.0
D Supplementary Results
D.1 Determination of Initial Sample Sizes
Our experimental observations reveal that increased initial sampling quantities generally enhance Pareto front coverage across benchmarks, as indicated by the progressive convergence toward reference fro...
11 21 9 81 66 32.40× 14.34×
iDSE 3 3 6 4 5 5 1 6 3 10 6 7 30.54× 25.07×
Figure 14 illustrates the initial explored Pareto fronts obtained from different benchmarks under the previously determined number of initial samples. As shown, the Random Sampling (RS), U-shaped Beta Sampling (BS), and Latin Hypercube Sampling (L...
0.7494 0.1521
autocorr 0.0883 0.0702 0.0931 0.7458
Avg 1 1.9097× 1.7808× 4.6109×
Geo Mean 1 1.6508× 1.3656× 3.1709×
as evidenced by the lack of significant differences among the indicators in the last three columns of Table 10. Unlike evolutionary algorithms that directly inherit and recombine population genes thr...
exploration during subsequent refinement stages. The uniform random sampling (RS) strategy generates independent feature vectors φ0 by selecting each parameter φj (for j = 1, ..., d dimensions) from a uniform distribution. Latin Hypercube Sampling (LHS) enforces stratified spatial coverage in d-dimensional space through or...
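The contrast between the two strategies can be sketched with a minimal pure-Python implementation of RS and of stratified LHS over the unit hypercube:

```python
import random

def uniform_random(d, n, rng):
    # RS: every coordinate drawn independently from U[0, 1).
    return [[rng.random() for _ in range(d)] for _ in range(n)]

def latin_hypercube(d, n, rng):
    # LHS: per dimension, each of the n strata of [0, 1) is hit exactly once,
    # using an independent random permutation of strata per dimension.
    samples = [[0.0] * d for _ in range(n)]
    for j in range(d):
        strata = list(range(n))
        rng.shuffle(strata)
        for i in range(n):
            samples[i][j] = (strata[i] + rng.random()) / n
    return samples

pts = latin_hypercube(d=2, n=5, rng=random.Random(0))
for j in range(2):
    # Exactly one sample falls in each stratum [k/5, (k+1)/5) per dimension.
    assert sorted(int(p[j] * 5) for p in pts) == [0, 1, 2, 3, 4]
```

Unlike RS, which can leave whole strata empty by chance, LHS guarantees marginal coverage of every stratum in every dimension.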
these configurations based on learned hardware optimization principles, preserving the flexibility and generalization of iDSE.
that iDSE significantly outperforms heuristic-based DSE methods, discovering Pareto fronts that, although potentially less diverse, are sufficient to satisfy multifaceted optimization preferences. Relaxing the maximum iteration limit would allow convergence toward a wider spectrum of Pareto-optimal design...
arXiv:2505.22087v1 [cs.AI] 28 May 2025
Cognitively-Inspired Emergent Communication via Knowledge Graphs for Assisting the Visually Impaired
Ruxiao Chen1, Dezheng Han2, Wenjie Han2, Shuaishuai Guo2 (1Johns Hopkins University; 2Shandong University)
rchen117@jh.edu, shuaishuai_guo@sdu.edu.cn
Abstract: Assistive systems for visu...
https://arxiv.org/abs/2505.22087v1
inputs or high-dimensional embeddings, failing to leverage structured, cognitively informed representations (Mu and Goodman, 2021). This limits their semantic alignment with how humans process visual scenes, by segmenting them into entities and reasoning over inter-object relationships (Biederman, 1987; Teney et al....
work extended EC to richer modalities, incorporating raw images (Dessì et al., 2021), pixel-based features (Nikolaus, 2024), and multimodal embeddings (Lee, 2024). Recent efforts have sought to improve the compositionality, interpretability, and generalization of EC protocols beyond mere architectural refinements....
cognitive structures makes them particularly suitable for supporting symbolic reasoning and language emergence (Conklin and Smith, 2023). In this work, we extend this line of inquiry by integrating knowledge graphs as structured inputs for EC, enabling agents to reason over entities and their relationships in a cogni...
as:
H_{l+1} = σ( D̂^{−1/2} Â D̂^{−1/2} H_l W_l ), (1)
where Â = A + I represents the adjacency matrix with added self-loops, D̂ is its degree matrix, W_l is a learnable weight matrix, and σ(·) denotes a non-linear activation function. To emulate the human ability to selectively attend to task-relevant stimuli, we integrate an attention mechanis...
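Equation (1) can be sketched in a few lines of NumPy with dense matrices and ReLU as σ; this is an illustration of the propagation rule, not the paper's implementation:

```python
import numpy as np

def gcn_layer(A, H, W):
    # One propagation step of Eq. (1):
    # H_{l+1} = sigma(D^{-1/2} (A + I) D^{-1/2} H_l W_l), with sigma = ReLU.
    A_hat = A + np.eye(A.shape[0])                  # adjacency with self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))   # D^{-1/2} diagonal entries
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)

A = np.array([[0.0, 1.0], [1.0, 0.0]])  # two mutually connected nodes
H = np.eye(2)                           # one-hot node features
W = np.eye(2)                           # identity weights for illustration
out = gcn_layer(A, H, W)
print(out)  # every entry is 0.5: each node averages itself and its neighbor
```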
emergent communication with these cognitive principles, we construct a visual cognitive graph for each image. Given an input image Ii, we extract a knowledge graph ci = (Vi, Ei), where Vi denotes object nodes (obtained via SAM segmentation) and Ei encodes spatial or functional relationships between them. These graphs se...
where τ > 0 is the temperature parameter that controls the smoothness of the distribution. As τ → 0, the soft sample m̃ℓ approaches a one-hot vector; as τ increases, the distribution becomes smoother. We set τ = 1 in our experiments to balance stability and approximation quality. During training, the soft messages m̃ℓ are use...
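The temperature behavior can be illustrated with a minimal NumPy Gumbel-softmax sketch; this is the generic formulation, not the paper's exact training code:

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, seed=0):
    # Soft categorical sample: softmax((logits + Gumbel noise) / tau).
    # tau -> 0 approaches a one-hot vector; larger tau smooths the sample.
    rng = np.random.default_rng(seed)
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel) / tau
    y = y - y.max()             # subtract max for numerical stability
    e = np.exp(y)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
soft = gumbel_softmax(logits, tau=1.0)   # smooth distribution over tokens
hard = gumbel_softmax(logits, tau=0.05)  # nearly one-hot
print(soft.round(3), hard.round(3))
```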
[Figure: (c) token-frequency distribution (token ID vs. frequency) for Original EC vs. VAG-EC; (d) performance comparison of Original EC vs. VAG-EC at vocabulary sizes 10, 20, and 80, with metric values between 0.192 and 0.839.]
usage across mid-rank tokens, producing a more compact yet expressive code. The cumulative coverage plot quantifies lexical diversity. The baseline achieves 90% coverage with just 4 tokens, whereas VAG-EC requires 6–7, reflecting broader symbol usage. Because each token in VAG-EC maps to a structured semantic elemen...
a cognitive map? Organizing knowledge for flexible behavior. Neuron, 100(2):490–509. Irving Biederman. 1987. Recognition-by-components: A theory of human image understanding. Psychological Review, 94(2):115–147. Brendon Boldt and David Mortensen. 2024. A review of the applications of deep learning-based emergent co...
Hermann, Karl Tuyls, and Stephen Clark. 2018. Emergence of linguistic communication from referential games with symbolic and pixel input. In International Conference on Learning Representations. Heeyoung Lee. 2024. One-to-many communication and compositionality in emergent communication. In Proceedings of the 2024 Co...
Systems, volume 35, pages 1389–1404. Curran Associates, Inc. Mathieu Rita, Corentin Tallec, Paul Michel, Jean-Bastien Grill, Olivier Pietquin, Emmanuel Dupoux, and Florian Strub. 2022b. Emergent communication: Generalization and overfitting in Lewis games. In Advances in Neural Information Processing Systems. Damien ...
generated dining scenario from the synthetic dataset.
Figure 5: Example of a real-world dining scenario used for testing.
B Evaluation Metrics Explanation
B.1 Context Independence
Context Independence (CI) measures the extent to which the generated messages remain consistent and generalizable across different environme...
arXiv:2505.22092v1 [cs.AI] 28 May 2025
VIRAL: Vision-Grounded Integration for Reward Design and Learning
Valentin Cuzin-Rambaud, Emilien Komlenovic, Alexandre Faure, Bruno Yun
Université Claude Bernard Lyon 1, France; CNRS, Ecole Centrale de Lyon, INSA Lyon, Université Lumière Lyon 2, LIRIS, UMR5205, France
{valentin.cu...
https://arxiv.org/abs/2505.22092v1
generation. Finally, instead of relying on direct access to the environment's code (as in EUREKA [10]) or structured abstraction (through Pydantic class definitions, as in Text2Reward [12]), VIRAL describes environments solely through their observations, following the Gymnasium [18] documentation. This simplifies imple...
specify steps to help the latter produce good-quality code. Among the strategies employed to enhance zero-shot generation, one particularly effective method is step-back prompting [20], which obtains a broader perspective by first reasoning about a problem at a higher level before generating a detailed r...
simple yet challenging scenarios. Next, we selected the Highway environment, where a vehicle must navigate a multi-lane road, avoiding collisions and optimizing its speed while adhering to driving rules. Given the increasing relevance of autonomous vehicles, we considered it essential to include this environment in our...
across environments. However, as VIRAL's main goal is to find the best reward, we investigated which modality led to the best behavior. Table 2 shows that, with the image modality, we discover behaviors that align better than those obtained with other modalities on Swimmer and Highway. These observations led us to the conclusion that ...
Garnett, editors, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada , pages 8022–8034, 2018. [4]Kimin Lee, Laura M. Smith, and Pieter Abbeel. PEBBLE: feedback-efficient interactive reinforcement learn...
Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948 , 2025. [15] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Adva...
arXiv:2505.22093v1 [cs.CY] 28 May 2025
From Coders to Critics: Empowering Students through Peer Assessment in the Age of AI Copilots
Santiago Berrezueta-Guzman, Technical University of Munich, Heilbronn, Germany, s.berrezueta@tum.de; Stephan Krusche, Technical University of Munich, Munich, Germany, krusche@tum.de; Stefan Wagner, T...
https://arxiv.org/abs/2505.22093v1
educators fear that over-reliance on AI suggestions will short-circuit the mastery of fundamental programming skills and collaborative work in software projects [8]. A recent survey of computer science instructors found that while all were immediately concerned about AI-assisted cheating, they diverged on ...
be very low (under 20% of the class) without proper incentives or guidance, and the feedback provided tends to be perfunctory. For instance, reviewers may leave only brief comments (e.g., one-line remarks), and some of these comments can be partially incorrect or superficial. Reviewers might rush through the task or ...
Total grade: 100%
The project centers on creating a classic-style 2D game inspired by the concept of Maze Runner. The gameplay revolves around steering a character through a complex maze. The main goal is to progress from the starting point to the exit while dealing with hidden traps, hostile entities, and a locked ga...
previous related work because it assessed the functional and technical quality of each team's programming project, not only the code.
A. Assessment process
Gameplay assessment: Each assessor team must clone and play the game developed by their assigned peers to complete the evaluation rubric detailed in Table II. In a...
feedback aimed to provide insights into the perceived fairness and acceptability of peer grading and the students' level of engagement in the process.
V. RESULTS
A. Reliability analysis results
Table IV shows that Peer Review 1 had a moderate positive correlation with instructor grades (r = 0.55), a MAE of 9.18, and a...
shows less dispersed points to the right of the Perfect Match Line than to the left.
Fairness in Peer Assessment. All 47 teams (100%) believed they provided fair evaluations. Many cited the consistent use of the grading rubric, collaborative discussion within the team, and an effort to remain unbiased despite challen...
that structured peer review, supported by detailed rubrics and anonymized processes, can be a reasonably accurate proxy for instructor evaluation. Additionally, we anticipate that a good peer assessment involves the evaluators discussing the rubrics and understanding the project’s components before providing feedback a...
can reliably evaluate each other's work using detailed rubrics. The alignment between peer and instructor evaluations and positive student perceptions of fairness and engagement underscores the pedagogical potential of peer review in programming education. Importantly, peer assessment also promotes key competencies t...
Cotton, and J. R. Shipway, “Chatting and cheating: Ensuring academic integrity in the era of chatgpt,” Innovations in education and teaching international , vol. 61, no. 2, pp. 228–239, 2024. [10] S. Bradley, “Addressing bias to improve reliability in peer review of programming coursework,” in Proceedings of the 19th K...
arXiv:2505.22096v1 [cs.CL] 28 May 2025
Knowledge Base Construction for Knowledge-Augmented Text-to-SQL
Jinheon Baek1*, Horst Samulowitz2, Oktie Hassanzadeh2, Dharmashankar Subramanian2, Sola Shirai2, Alfio Gliozzo2, Debarun Bhattacharjya2 (KAIST1, IBM Research2)
jinheon.baek@kaist.ac.kr, {samulowitz, hassanzadeh}@us.ibm.com, dharmash@us...
https://arxiv.org/abs/2505.22096v1
proposed collecting and annotating explicit knowledge, which is then leveraged for SQL generation (Dou et al., 2022; Li et al., 2023). However, while these approaches substantially improve the performance of existing text-to-SQL models, they rely on extensive human annotations, which may be suboptimal (and nearly impra...
can be directly reused or provide insights for multiple queries within the same database, as shown in Figure 1 (Right). Also, this knowledge can be generalizable to other queries for different databases. Motivated by these observations, this work proposes an automatic approach to build a knowledge base, designed ...
queries, which oftentimes requires grounding in the database schemas or additional domain-specific information for specialized domains, which gives rise to the need for leveraging external knowledge for text-to-SQL.
Knowledge-Augmented Text-to-SQL: There are a few recent studies that propose augmenting text-to-SQL ...
tables and columns. Then, the SQL generation model f can be represented as follows: s = f(q, D), where s is the SQL statement (consisting of a sequence of tokens) that attempts to retrieve the information requested by q over D. In this work, we operationalize f with LLMs, to harness their strong capability in understanding the ...
base K as a collection of knowledge entries, each represented as a concise sentence, denoted as follows: k ∈ K. For instance, in the medical domain, one knowledge entry might be "Abnormal white blood cell count refers to WBC ≤ 3.5 or WBC ≥ 9.0", which describes the abnormal range of white blood cell counts and its corres...
knowledge base K. Hereafter, the next question to answer is then
Algorithm 1 Knowledge-Augmented Text-to-SQL
Require: Dataset D containing query-schema pairs (q, D); LLM model LLM; Prompt templates T
Ensure: SQL statement s for a given query q
1: Phase 1: Knowledge Base Construction
2: K ← {} ∪ D ▷ Initialize knowledge base ...
prior for formulating SQL statements. Spider is another benchmark dataset, built upon 200 databases across 138 domains. Unlike BIRD, samples in Spider do not have annotated knowledge for text-to-SQL. Lastly, we consider a challenging real-world text-to-SQL dataset, namely CSTINSIGHT, which is designed with actual cu...
which uses oracle knowledge annotated by humans, along with the queries, to generate the SQL statements. This approach serves as an upper bound and is not directly comparable to other models due to its reliance on accurate, manually curated knowledge that is typically unavailable.
4.3 Evaluation Metrics
Following the ...
steps during knowledge base construction. For evaluation, we use two metrics: Exact Match, which identifies whether the knowledge base contains an entry that precisely matches the knowledge required for a given query, and Semantic Similarity, which assesses how closely related the most similar entry (in the knowl...
substantial improvements when we incorporate the retrieved knowledge from our knowledge base into the text-to-SQL generation process. Furthermore, instead of directly using the retrieved knowledge, refining this retrieved knowledge yields additional improvements, underscoring the importance of not only retrieving r...
with the state-of-the-art text-to-SQL model on the BIRD leaderboard.
Models | EX
ChatGPT | 24.05
ChatGPT + CoT | 25.88
ExSL + granite-20b-code | 51.69
ExSL + granite-20b-code w/ KAT-SQL (Ours) | 57.56
ExSL + granite-20b-code w/ Oracle Knowledge | 65.38
performs all baselines regardless of the choice of LLMs, which demonstrates th...
than 3.5 or greater than 5.5
2) glucose is within normal range refers to GLU between 70 and 100 mg/dL
3) Hemoglobin (Hb) is considered normal for males if levels range from 13.5 to 17.5 g/dL
Example 2: Eligible free rate for K-12 = Free Meal Count (K-12) / Enrollment (K-12)
1) Eligible reduced-price rate for K-12 = Redu...
October 2022, volume 3274 of CEUR Workshop Proceedings, pages 11–34. CEUR-WS.org. Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M. Dai, Anja Hauth, Katie Millican, David Silver, Slav Petrov, Melvin Johnson, Ioannis Antonoglou, Julian Schrittwi...
and Xiao Huang. 2024. Knowledge-to-SQL: Enhancing SQL generation with data expert LLM. arXiv preprint arXiv:2402.11517. Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, G...
In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10-16, 2023. Nitarshan Rajkumar, Raymond Li, and Dzmitry Bahdanau. 2022. Evaluating the text-to-SQL capabilities of large language models. arXiv ...
Isil Dillig, and Thomas Dillig. 2017. Sqlizer: query synthesis from natural language. Proc. ACM Program. Lang. , 1(OOPSLA):63:1–63:26. Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir R. Radev. 2018. Spider: A large-scale...
{Database Schema}
Question: {Few-Shot Question 1}
Evidence: {Few-Shot Evidence 1}
Question: {Few-Shot Question 2}
Evidence: {Few-Shot Evidence 2}
...
Question: {Few-Shot Question 10}
Evidence: {Few-Shot Evidence 10}
Question: {Target Question}
Evidence:
SQL Generation
DB Schema: {Database Schema}
Question: {Few-Shot Que...
arXiv:2505.22104v1 [cs.AI] 28 May 2025
Efficient Dynamic Shielding for Parametric Safety Specifications
Davide Corsi1, Kaushik Mallik2, Andoni Rodríguez2, and César Sánchez2
1University of California, Irvine, USA; dcorsi@uci.edu
2IMDEA Software Institute, Spain; {kaushik.mallik,andoni.rodriguez,cesar.sanchez}@imdea.org
Ab...
https://arxiv.org/abs/2505.22104v1
whose map is unknown a priori. The workspace is filled with static obstacles, and the robot must avoid colliding with them at all times. However, the visibility of the robot is limited by the range of its sensors, and therefore it can see obstacles only when it gets close to them. If the entire map were visible to th...
a lightweight online adaptation procedure that is significantly cheaper than the pure online algorithm, which would instead compute a new shield for Φ from scratch. We propose abstraction-based synthesis algorithms for our dynamic shields, though other alternatives could also be pursued [28]. Concretely, we first crea...