text string | source string |
|---|---|
regions receive lower scores in subsequent iterations, the initial performance penalty demonstrates that excessive sampling can be counterproductive. These findings showcase the importance of balanced exploitation within selected regions: sufficient sampling to capitalize on promising areas without overcommitting compu... | https://arxiv.org/abs/2505.21372v1 |
Figure 10: Performance comparison of exploration parameter settings (αmax ∈ {0.2, 0.5, 0.7, 1.0}) across the four FCNet benchmark tasks. Each curve represents the mean objective function value over 5 independent runs, with shaded regions denoting standard error. The results illustrate that optimal αmax selection exhibits t... | https://arxiv.org/abs/2505.21372v1 |
previously evaluated configurations and their performance metrics. These are the in-context learning examples. { {lr: 0.4, activation: "relu", num_layer: 8,...} F1: 5.65 {lr: 0.03, activation: "tanh", num_layer: 8,...} F1: 3.23 ... } $target_number_of_candidates: The number of new configurations that the candidate sample... | https://arxiv.org/abs/2505.21372v1 |
are: $Region_ICL_examples ## Reference configurations with Known Performance Below are examples of configurations that have been evaluated, showing their operations and performance metrics: ## Candidate configurations to Evaluate You must predict performance for these new configurations: $target_architectures ## Yo... | https://arxiv.org/abs/2505.21372v1 |
arXiv:2505.21388v1 [cs.SI] 27 May 2025. DeSocial: Blockchain-based Decentralized Social Networks. Jingyuan Huang (Rutgers University), Xi Zhu (Rutgers University), Minghao Guo (Rutgers University), Yongfeng Zhang∗ (Rutgers University). Abstract: Web 2.0 social platforms are inherently centralized, with user data and algorithmic decisions... | https://arxiv.org/abs/2505.21388v1 |
users receive feeds via personalized algorithms. By contrast, blockchain as a Web 3.0 technology is a promising alternative [19, 42, 63, 76, 2]. With their emphasis on decentralization [40, 19], transparency [93, 29] and verifiable interactions [47, 30], blockchain-based systems offer a new paradigm for social, transactio... | https://arxiv.org/abs/2505.21388v1 |
completely decentralized environment where each user node has full access to the local view of the social network structure, but raw data cannot be shared across nodes [87, 45, 37, 46]. Furthermore, the storage and computational capabilities of each node are limited, making it infeasible to deploy large-scale models su... | https://arxiv.org/abs/2505.21388v1 |
corresponds to the successful validation of a transaction and its subsequent broadcasting to all nodes. Definition 3.1 (Temporal Graph). A temporal graph is formally defined as G_t = (V_t, E_t), where V_t and E_t denote a set of N nodes and a set of directed edges at time t, respectively. Definition 3.2 (Node-Specific Backbon... | https://arxiv.org/abs/2505.21388v1 |
different inductive biases. • Consensus under variance: Final decisions must tolerate noise and diversity among validators. 4 Our Framework. 4.1 Framework Overview. Our framework DeSocial comprises two modules: (1) a personalized algorithm selection mechanism that allows users to select their algorithms for their prediction... | https://arxiv.org/abs/2505.21388v1 |
τ=0 v_n ∉ N_t(u)} (5), with a size of γ. We select F_u through Eq. 6, where Π^p_{u,v} = exp(α·(t−t_e)) denotes the edge weight for algorithm selection; edges that emerge later receive greater weight. t_e is the emergence time of (u, v), and α is the adjustment coefficient. By leveraging the local subgraph and blockchain, users can ch... | https://arxiv.org/abs/2505.21388v1 |
F_{ϕ_j} locally and independently. Each ϕ_j predicts the edges (p_i, q_i) and (p_i, q′_i), q′_i ∈ Neg(ϕ, p_i, q_i, t), making a binary decision Vote(ϕ_j, p_i, q_i, t+1) through Eq. 2. Each ϕ_j then sends its vote to the blockchain via the smart contract. Aggregation: The smart contract initiates a roll-call procedure that collects the i... | https://arxiv.org/abs/2505.21388v1 |
decentralized settings. Compared to two simpler hybrid baselines, random selection and rule-based selection, DeSocial-PA consistently delivers higher performance across most datasets and evaluation metrics. Specifically, DeSocial-PA outperforms all centralized baselines in all three metrics for UCI and Memo-Tx, and sho... | https://arxiv.org/abs/2505.21388v1 |
prediction accuracy. Gains converge as n increases to 9. We further analyze how the performance gain from multiple-node consensus over a single centralized backbone varies with the number of validators, from 3 to 11, as shown in Figure 5. As the committee size increases, the marginal improvement diminishes, and the gai... | https://arxiv.org/abs/2505.21388v1 |
details of the computation device. In a real ETH network deployment with multiple participating users, the system would better exploit parallelism and further improve the efficiency. 6 Conclusions We extended the prediction of social networks to a decentralized setting and implemented our DeSocial framework with the We... | https://arxiv.org/abs/2505.21388v1 |
NIPS’17, pages 1025–1035, 2017. [14] Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, and Meng Wang. Lightgcn: Simplifying and powering graph convolution network for recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 639–6... | https://arxiv.org/abs/2505.21388v1 |
Research and Development in Information Retrieval, SIGIR ’23, pages 1680–1689, New York, NY, USA, 2023. Association for Computing Machinery. [29] Zihao Li, Jianfeng Li, Zheyuan He, Xiapu Luo, Ting Wang, Xiaoze Ni, Wenwu Yang, Xi Chen, and Ting Chen. Demystifying defi mev activities in flashbots bundle. In Proceedings ... | https://arxiv.org/abs/2505.21388v1 |
Systems, 35:32928–32941, 2022. [45] Tao Qi, Fangzhao Wu, Chuhan Wu, Lingjuan Lyu, Tong Xu, Hao Liao, Zhongliang Yang, Yongfeng Huang, and Xing Xie. Fairvfl: A fair vertical federated learning framework with contrastive adversarial learning. Advances in Neural Information Processing Systems, 35:7852–7865, 2022. [46] ... | https://arxiv.org/abs/2505.21388v1 |
Leskovec, and Pan Li. Inductive representation learning in temporal networks via causal anonymous walks. arXiv preprint arXiv:2101.05974, 2021. [60] Wei Wei, Xubin Ren, Jiabin Tang, Qinyong Wang, Lixin Su, Suqi Cheng, Junfeng Wang, Dawei Yin, and Chao Huang. Llmrec: Large language models with graph augmentation for re... | https://arxiv.org/abs/2505.21388v1 |
Mei, Hang Gao, Juntao Tan, Zujie Liang, and Yongfeng Zhang. A-mem: Agentic memory for LLM agents. arXiv preprint arXiv:2502.12110, 2025. [75] Wujiang Xu, Qitian Wu, Runzhong Wang, Mingming Ha, Qiongxu Ma, Linxun Chen, Bing Han, and Junchi Yan. Rethinking cross-domain sequential recommendation under open-world assumpti... | https://arxiv.org/abs/2505.21388v1 |
blockchain-inspired collaboration for root cause analysis in micro-services architecture. arXiv preprint arXiv:2404.12135, 2024. [90] Xin Zhang, Daochen Zha, and Qiaoyu Tan. E2gnn: Efficient graph neural network ensembles for semi-supervised classification. arXiv preprint arXiv:2405.03401, 2024. [91] Yang Zhang, Fu... | https://arxiv.org/abs/2505.21388v1 |
... 8; 5.5 Efficiency Analysis 9; 6 Conclusions 9; A Details for the Experiments 18; A.1 Datasets 18; A.2 Baselines ... | https://arxiv.org/abs/2505.21388v1 |
... 27; C.3 Consensus Analysis of Different Node Groups 28; D Broader Impacts 28; E Ethics Statement 29. A Details for the Experiments. A.1 Datasets. Table 4 shows the number of nodes and edges, network density and network types, in each dataset. For each dat... | https://arxiv.org/abs/2505.21388v1 |
framework that learns aggregation functions over sampled neighbors, supporting generalization to unseen nodes; it suits large and dynamic graphs by balancing performance and scalability, and it computes faster by reducing the number of neighbors used for aggregation. SGC [64]: The simplified graph convolution network removes n... | https://arxiv.org/abs/2505.21388v1 |
A.4 Framework Training. Training Strategies. At each time step t, the model is trained with a full-retraining strategy, taking ∪_{τ=0}^{t−1} G_τ as the training dataset. The model is validated on G_t and tested on G_{t+1}, simulating an inductive setting where future edges must be predicted without directly observing them during train... | https://arxiv.org/abs/2505.21388v1 |
the high implementation and economic costs required for deployment to a public ETH network. A.7 Table of Notation. Table 5: List of main notations used in this paper. Symbol: Description. G_t = (V_t, E_t): temporal graph at time t, with node set V_t and edge set E_t. F: pool of backbone models. F_Full: the set of backbones used in t... | https://arxiv.org/abs/2505.21388v1 |
the union of the historical snapshots G_0, G_1, ..., G_t, and each node stores one copy of D_t. • Step 4: User p_i creates a personalized neighborhood sampling task based on local graph structure. • Step 5: Validator nodes retrieve p_i's request through the blockchain smart contract, and evaluate it using different available algor... | https://arxiv.org/abs/2505.21388v1 |
all steps is reported as an amortized average over all users sending link prediction requests at t. Steps 4, 5, 6, 7, and 10 have lower latency due to operations on user-neighborhood structures, which scale with |V_t| ∼ 10^3. Although Step 10 yields calls to the smart contract via the blockchain with the number of time scalin... | https://arxiv.org/abs/2505.21388v1 |
introduces a small but non-negligible overhead. It is important to note that when a user selects a slower algorithm that yields better performance, the runtime of Step 3 should be determined by the slowest backbone in F, because each validator independently trains its own model in parallel, and the overall execution ... | https://arxiv.org/abs/2505.21388v1 |
and the standard deviations (%). Bold denotes the highest value per dataset at three evaluation metrics. All other parameters are fixed to their optimal values. Dataset / α / Acc@2 / Acc@3 / Acc@5: UCI: α = 0: 72.84 ± 0.27, 62.38 ± 0.32, 51.00 ± 0.33; α = −0.01: 73.02 ± 0.28, 62.74 ± 0.32, 51.37 ± 0.38; α = −0.1: 73.35 ± 0.28, 62.80 ± 0.30, 51.34 ± 0.3... | https://arxiv.org/abs/2505.21388v1 |
0.04; γ = 500: 95.53 ± 0.02, 92.40 ± 0.02, 87.52 ± 0.04; γ = 750: 95.54 ± 0.02, 92.39 ± 0.02, 87.51 ± 0.04; γ = 1000: 95.57 ± 0.02, 92.44 ± 0.02, 87.58 ± 0.04; γ = 1250: 95.55 ± 0.02, 92.42 ± 0.02, 87.55 ± 0.04. Sensitivity to γ. Table 10 shows the performance given γ ∈ {250, 500, 750, 1000, 1250}. In DeSocial, γ controls the size of the neighbor sample set to ... | https://arxiv.org/abs/2505.21388v1 |
conflicts between backbone algorithms. Whether to adaptively choose combinations based on graph structure is the next question to be investigated. Table 11: Performance comparison over different F. Acc@2 (%) is reported with mean and standard deviation. "Y" indicates improvement over all centralized models in F, while ... | https://arxiv.org/abs/2505.21388v1 |
single model evaluated over test periods 30 to 39. Accuracy Threshold and Voting Effectiveness . As shown in Figure 9 and Figure 11, in general, when the centralized accuracy is above 0.5, decentralized consensus almost always improves or maintains the performance. However, when the centralized model falls below 0.5, t... | https://arxiv.org/abs/2505.21388v1 |
5/5 agreement (i.e., among all agreements in this quartile, the fraction in which every validator cast a true vote) at Acc@2 on each quartile. Q1 nodes exhibit the lowest proportion of 5/5 agreement due to insufficient neighborhood information. Q2 and Q3 nodes gain more neighbors, enabling more reliable feature agg... | https://arxiv.org/abs/2505.21388v1 |
arXiv:2505.21391v1 [cs.LG] 27 May 2025. Finite Sample Analysis of Linear Temporal Difference Learning with Arbitrary Features. Zixuan Xie∗ (University of Virginia, xie.zixuan@email.virginia.edu), Xinyu Liu∗ (University of Virginia, xinyuliu@virginia.edu), Rohan Chandra (University of Virginia, rohanchandra@virginia.edu), Shangtong Zhan... | https://arxiv.org/abs/2505.21391v1 |
Independent, i.i.d. ✓; Srikant and Ying [2019]: γ < 1, Independent, Markovian ✓; Chen et al. [2023b]: γ < 1, Independent, i.i.d. ✓; Wang and Zhang [2024]: γ < 1, Arbitrary, Markovian; Mitra [2025]: γ < 1, Independent, Markovian ✓; Theorem 1: γ < 1, Arbitrary, Markovian ✓; Tsitsiklis and Roy [1999]: γ = 1, Independent, Markovian; Zhang et al. [2021c]: ... | https://arxiv.org/abs/2505.21391v1 |
→ R (to be defined shortly). We consider a feature mapping x: S → R^d and a weight vector w ∈ R^d. We then approximate v_π(s) with x(s)⊤w. We use X ∈ R^{|S|×d} to denote the feature matrix, where the s-th row of X is x(s)⊤. The approximated state-value function across all states can then be represented as the vector Xw ∈ R^{|S|}. The goal is... | https://arxiv.org/abs/2505.21391v1 |
always nonempty or not, much less the behavior of {w_t}. 3 Main Results. We start with our assumptions. As promised, we do not make any assumption on X. Assumption 3.1. The Markov chain associated with P_π is irreducible and aperiodic. Assumption LR. The learning rates are α_t = α/(t+t_0)^ξ and β_t = c_β α_t, where ξ ∈ (0.5, 1], α > 0, t_0... | https://arxiv.org/abs/2505.21391v1 |
key to prove that W* is nonempty (Lemma 15). Third, this decomposition is the key to characterizing W* in that W* = {w*} + ker(X_1), with w* being any vector in W* (Lemma 16). To better understand this characterization, we note that ker(X_1) = {w | Xw = c1, c ∈ R} (Lemma 16). As a result, adding (Footnote 2: This theorem only discusses the case of λ = 0... | https://arxiv.org/abs/2505.21391v1 |
the chain and we additionally include Xw in the bound for the same reason. We now present our general results regarding the convergence rate of (SA) to W*. Theorem 3. Let Assumptions A1–A5 and LR hold. Denote κ := αC_{A4}; then there exist some constants t_0 and C_{Thm3} such that the iterates {w_t} generated by (SA) satisfy for ... | https://arxiv.org/abs/2505.21391v1 |
convergence to a set, most prior works require the set to be bounded [Kushner and Yin, 2003, Borkar, 2009, Liu et al., 2025a]. Only a few prior works allow stochastic approximation to converge to an unbounded set, see, e.g., Bravo and Cominetti [2022], Chen [2025], Blaser and Zhang [2025], which apply to only tabular R... | https://arxiv.org/abs/2505.21391v1 |
Section D.3. We then have ⟨w_t − Γ(w_t), A(w_t − Γ(w_t))⟩ ≤ −C_7∥w_t − Γ(w_t)∥², which satisfies Assumption A4. For Assumption A5, (6) verifies Assumption A5(1). Assumption A5(2) is verified by the following lemma. Lemma 8. There exists a constant C_8 such that for ∀w, y, ∥H(w, y)∥ ≤ C_8(∥Xw∥ + 1). The proof is in Section D.4. Assumpti... | https://arxiv.org/abs/2505.21391v1 |
Figure 1: Convergence of (Discounted TD) with γ = 0.9, α ∈ {0.005, 0.01}; the panels plot d(w_t, W*) against steps (×10^6) for three parameter settings (0.1, 0.5, 0.9). Curves are averaged over 10 runs with shaded regions (too small to be visible) indicating standard errors. | https://arxiv.org/abs/2505.21391v1 |
bounds. ArXiv Preprint, 2022. Qi Cai, Zhuoran Yang, Jason D Lee, and Zhaoran Wang. Neural temporal-difference and q-learning provably converge to global optima. ArXiv Preprint, 2019. Xuyang Chen, Jingliang Duan, Yingbin Liang, and Lin Zhao. Global convergence of two-timescale actor-critic for solving linear quadratic r... | https://arxiv.org/abs/2505.21391v1 |
& Business Media, 2006. Shaan Ul Haque and Siva Theja Maguluri. Stochastic approximation with unbounded Markovian noise: A general-purpose theorem. ArXiv Preprint, 2024. Sreejeet Maity and Aritra Mitra. Adversarially-robust TD learning with Markovian data: Finite-time rates and fundamental limits. ArXiv Preprint, 2025.... | https://arxiv.org/abs/2505.21391v1 |
off-policy policy evaluation with function approximation. In Proceedings of the International Conference on Machine Learning, 2021a. Shangtong Zhang, Hengshuai Yao, and Shimon Whiteson. Breaking the deadly triad with a target network. In Proceedings of the International Conference on Machine Learning, 2021b. Shangtong Zhang... | https://arxiv.org/abs/2505.21391v1 |
Z_1 is now the first m columns of X. Apparently, we still have 1 ∉ col(X_1). Lemma 14. Let Assumption 3.1 hold. Then A = X_1⊤D_π(P_λ − I)X_1, b = X_1⊤D_π(r_λ − (J_π/(1−λ))1). Proof. Applying the decomposition shown in Lemma 1, we get A = (X_1+X_2)⊤D_π(P_λ−I)(X_1+X_2) = X_1⊤D_π(P_λ−I)X_1 + X_2⊤D_π(P_λ−I)X_1 + X_1⊤D_π(P_λ−I)X_2 + X_2⊤D_π(P_λ−I)X_2 = X_1⊤D_π(P_λ−I)X_1, wher... | https://arxiv.org/abs/2505.21391v1 |
proof. C.1 Proof of Lemma 2. Proof. ∥Xw∥ = ∥Xw − XΓ(w) + XΓ(w)∥ ≤ ∥X(w − Γ(w))∥ + ∥XΓ(w)∥ ≤ ∥X∥∥w − Γ(w)∥ + C_{A5} (Assumption A5). C.2 Proof of Lemma 3. Proof. According to the definition of H(w_t, Y_t) in (10), ∥H(w_t, Y_t)∥² ≤ C_{A5}²(∥Xw_t∥ + 1)² (by Assumption A5) ≤ 2C_{A5}²(∥Xw_t∥² + 1) ≤ 2C_{A5}²(C_2²(∥w_t − Γ(w_t)∥ + 1)² + 1) ≤ 2C_{A5}²(2C_2²(∥w_t − Γ(w_t)∥²... | https://arxiv.org/abs/2505.21391v1 |
(t+t_0)^ξ. For E_1, set t_0 > κ = C_{A4}α; we have ∏_i^t (1 − C_{A4}α_i)·E[L(w_t)] = ∏_i^t (1 − C_{A4}α/(i+t_0)^ξ)·E[L(w_t)] ≤ ∏_i^t (1 − κ/(i+t_0))·E[L(w_t)] = E[L(w_t)]·∏_i^t (i+t_0−κ)/(i+t_0) ≤ E[L(w_t)]·((t+t_0)/(t+t_0−κ))^{⌊κ⌋}. For E_2, we have E_2 = Σ_j^t (∏_{i=j+1}^t (i+t_0−κ)/(i+t_0))·ln(j+t_0)/(j+t_0)^{2ξ} = Σ_j^{t−⌊κ⌋} (∏_{i=j+1}^t (i+t_0−κ)/(i+t_0))·ln(j+t_0)/(j+t_0)^{2ξ} + Σ_{j=t−⌊κ⌋+1}^t (∏ ... | https://arxiv.org/abs/2505.21391v1 |
get Xw = 0, further implying Aw = 0, which is a contradiction. We have now proved that w⊤Aw ≠ 0. We next prove that w⊤Aw < 0. This follows from the fact that A is n.s.d., i.e., ∀z ∈ R^d, z⊤Az ≤ 0. But w⊤Aw ≠ 0, so we must have w⊤Aw < 0. Finally, we use an extreme value theorem argument to complete the proof. Define Z := {w | w ∈ ker(A)⊥, ∥w∥ = ... | https://arxiv.org/abs/2505.21391v1 |
equality holds because z_2 ∈ ker(X_1)⊥. Thus, z⊤Ãz = −c_β z_1² − z_2⊤q z_1 + z_2⊤B z_2. We now characterize z_2⊤B z_2. Apparently, z_2⊤B z_2 ≤ 0 always holds because D_π(P_λ − I) is n.s.d. In view of (5), the equality holds only if X_1 z_2 = c1. But 1 ∉ col(X_1) and z_2 ∈ ker(X_1)⊥, so the equality holds only when z_2 = 0. Now we have proved that ∀z_2 ∈ ker(X_1)⊥... | https://arxiv.org/abs/2505.21391v1 |
(numeric table fragment; column headers not recovered) Each experiment runs for 1... | https://arxiv.org/abs/2505.21391v1 |
arXiv:2505.21393v1 [cs.LG] 27 May 2025. Leveraging the Power of Conversations: Optimal Key Term Selection in Conversational Contextual Bandits. Maoli Liu (The Chinese University of Hong Kong, Hong Kong, China, mlliu@cse.cuhk.edu.hk), Zhuohua Li∗† (Guangzhou Institute of Technology, Xidian University, Guangzhou, Guangdong, China, z... | https://arxiv.org/abs/2505.21393v1 |
challenge in recommender systems is the trade-off between exploration (i.e., recommending new items to uncover users’ unknown preferences) and exploitation (i.e., recommending items that align with users’ historical preferences). Contextual bandits [15] address this trade-off by enabling the system to learn from us... | https://arxiv.org/abs/2505.21393v1 |
of rounds. However, despite incorporating additional queries about key terms, the method does not yield substantial improvement over non-conversational approaches. Since then, improving regret through conversational interactions has remained an open problem in the field. Wang et al. [25] and Yang et al. [28] introduc... | https://arxiv.org/abs/2505.21393v1 |
allowing them to be applied independently or in combination. Therefore, we further propose the CLiSK-ME algorithm, which integrates both techniques to maximize exploration efficiency and adaptively adjust user interactions. By leveraging the strengths of both methods, CLiSK-ME enhances exploration efficiency and op... | https://arxiv.org/abs/2505.21393v1 |
term k ∈ K corresponds to a context vector x̃_k ∈ R^d. At round t, if a conversation is initiated, the agent selects a key term k ∈ K, queries the user, and receives key-term-level feedback r̃_{k,t}. We follow the formulation of Wang et al. [25] that the user’s preference vector θ* remains consistent across both arms and ... | https://arxiv.org/abs/2505.21393v1 |
and ρ² controls the level of perturbations. Each dimension of ε_k is truncated within [−R, R] for some R > 0, i.e., |(ε_k)_j| ≤ R, ∀j ∈ [d]. 3.1.2 Key Term Selection. When initiating conversations, the agent no longer selects key terms directly based on their original contexts. Instead, the agent applies a small random per... | https://arxiv.org/abs/2505.21393v1 |
of the feature space. To facilitate exploration, we introduce the following assumption. Assumption 3. We assume that the elements in the key term set K are sufficiently rich and diverse, such that for any x ∈ R^d satisfying ∥x∥₂ = 1, there exists a key term k ∈ K such that |x̃_k⊤x| ≥ c_0, where c_0 is some constant close to ... | https://arxiv.org/abs/2505.21393v1 |
b(t) = bt for some b ∈ (0,1). We start with Lemma 1, which bounds the difference between the estimated and true rewards for each arm. Lemma 1. Under Assumptions 1 and 2, for CLiSK, for any round t ∈ [T] and any arm a ∈ A, with probability at least 1−δ for some δ ∈ (0,1), we have |x_a⊤θ_t − x_a⊤θ*| ≤ α_t∥x_a∥_{M_t⁻¹}, wh... | https://arxiv.org/abs/2505.21393v1 |
covariance matrix eigenvalues. The condition t ≥ 2P is introduced to generalize all three checking functions. Building on this, we derive the following theorem for CLiME. Theorem 2 (Regret of CLiME). With probability at least 1−δ for some δ ∈ (0,1), the regret upper bound of CLiME satisfies R(T) ≤ 4√(2T/α)·(√(2 log(1... | https://arxiv.org/abs/2505.21393v1 |
the fact that ∥x̃_k∥ = 1 and obtain Tr(X) = E_{k∼unif(B)}[Tr(x̃_k x̃_k⊤)] = 1, thus λ_B ≤ (Σ_{i=1}^d λ_i)/d = Tr(X)/d = 1/d. Consequently, by plugging this result back into the regret expression, the regret bound of ConLinUCB-BS cannot be better than O(d√(T log T)). These previous attempts underscore the significance of our ... | https://arxiv.org/abs/2505.21393v1 |
for arms x_i and the preference vectors for users θ_u. Following existing works, we decompose the feedback matrix R using truncated Singular Value Decomposition (SVD) as R ≈ ΘSA⊤, where Θ ∈ R^{N×d} and A ∈ R^{|A|×d} contain the top-d left and right singular vectors, and S ∈ R^{d×d} is a diagonal matrix with the correspondin... | https://arxiv.org/abs/2505.21393v1 |
CLiSK-ME use the “Fixed Interval” function. KDD ’25, August 3–7, 2025, Toronto, ON, Canada. Maoli Liu, Zhuohua Li, Xiangxiang Dai, and John C.S. Lui. (Figure panels: regret versus round, 0–6000, on (a) the Synthetic dataset, (b) the MovieLens dataset, and (c) the Yelp dataset.)... | https://arxiv.org/abs/2505.21393v1 |
three algorithms show substantial improvements compared to ConUCB and exhibit performance comparable to the ConLinUCB family of algorithms. For CLiME and CLiSK-ME, while matrix operations and eigenvalue computation introduce slight overhead, the algorithms remain efficient, particularly with interval and exponentia... | https://arxiv.org/abs/2505.21393v1 |
multi-agent LLM response identification with a fixed arm set. Wang et al. [25] and Yang et al. [28] investigate the key term selection strategies and propose the ConLinUCB-BS and ConDuel algorithms, respectively. Both algorithms uniformly select key terms from the barycentric spanner of the key term set. The smooth... | https://arxiv.org/abs/2505.21393v1 |
and Data Engineering 36, 12 (2024), 8939–8953. [9] Xiangxiang Dai, Yuejin Xie, Maoli Liu, Xuchuang Wang, Zhuohua Li, Huanyu Wang, and John C. S. Lui. 2025. Multi-Agent Conversational Online Learning for Adaptive LLM Response Identification. arXiv:2501.01849 [cs.HC] [10] Yi Gai, Bhaskar Krishnamachari, and Rahul Jain. 20... | https://arxiv.org/abs/2505.21393v1 |
of the AAAI Conference on Artificial Intelligence, Vol. 37. 10288–10295. [26] Junda Wu, Canzhe Zhao, Tong Yu, Jingyang Li, and Shuai Li. 2021. Clustering of conversational bandits for user preference learning and elicitation. In Proceedings of the 30th ACM International Conference on Information & Knowledge Manageme... | https://arxiv.org/abs/2505.21393v1 |
(Figure panels: estimation error ∥θ̂_t − θ*∥₂ versus round, 100–1100, on (b) the MovieLens, (c) Yelp, and (d) Last.fm datasets, comparing CLiSK-ME, CLiME, CLiSK, ConUCB, LinUCB, Arm-Con, ConLinUCB-MCR, and ConLinUCB-BS.) Figure 9: Comparison of estimation precision where... | https://arxiv.org/abs/2505.21393v1 |
align it with the x-axis, maintaining its length but zeroing out all components except the first, i.e., Qθ_t = (∥θ_t∥, 0, 0, ..., 0). Note that such Q always exists because it just rotates the space. According to CLiSK’s key term selection strategy x̃̃_{k_t} = arg max_{k∈K} θ_t⊤x̃̃_k, we have λ_min(E[x̃̃_{k_t} x̃... | https://arxiv.org/abs/2505.21393v1 |
is because e^{−x} is decreasing. Choosing ε = 1/2, we get Pr[λ_min(Σ_{s=1}^t Σ_{k∈K_s} x̃̃_k x̃̃_k⊤) ≤ λ_K b t/2] ≤ d√2·e^{−λ_K b t/(2(1+√d R)²)}. Letting the RHS be δ, we get t = 2(1+√d R)² log(d/δ)/(λ_K b(1−log 2)) ≤ (8(1+√d R)²/(λ_K b))·log(d/δ). Therefore, λ_min(Σ_{s=1}^t Σ_{k∈K_s} x̃̃_k x̃̃_k⊤) ≥ λ_K b t/2... | https://arxiv.org/abs/2505.21393v1 |
then Σ_{s=1}^t K_s ≤ αdt/c_0². For the “Fixed Interval Checking” function, at each uncertainty checking point t_j = jP, where j ∈ {1, 2, ..., ⌊T/P⌋}, we have λ_min(M_{t_j}) ≥ αt_j. For the j-th checking, there are Σ_{i=1}^d (αt_j − λ_{v_i})/c_0² ≤ Σ_{i=1}^d (αt_j − αt_{j−1})/c_0² ≤ αdP/c_0² conversations to be launched. Thus, by ro... | https://arxiv.org/abs/2505.21393v1 |
≤ T_0 + Σ_{t=T_0+1}^T 2α_t∥x_{a_t}∥_{M_t⁻¹} (17) ≤ T_0 + 2Σ_{t=T_0+1}^T α_t√(2/(λ_K b t)) (18) ≤ T_0 + 4α_T√(2T/(λ_K b)) (19), where Equation (17) is because the instantaneous regret reg_t ≤ 1 by Assumption 1, Equation (18) follows from Lemma 4, and Equation (19) is because α_t is non-decreasi... | https://arxiv.org/abs/2505.21393v1 |
of KL divergence D(P∥Q) = ∫_Ω log(dP/dQ) dP, D(P_θ∥P_θ′) = ∫_Ω log(dP_θ/dP_θ′) dP_θ = E_θ[log(dP_θ/dP_θ′)]. Note that log(dP_θ/dP_θ′)(H_T) = log(p_{θ,π}(H_T)/p_{θ′,π}(H_T)) (23) = log(Π_{t=1}^T π_t^{arm}(a_t|H_{t−1}) p_{a_t}(x_t)·π_t^{key}(k_t|H_{t−1}) p̃_{k_t}(x̃_t) / Π_{t=1}^T π_t^{arm}(a_t|H_{t−1}) p′_{a_t}(x_t)·π_t^{key}(k_t|H_{t−1}) p̃′... | https://arxiv.org/abs/2505.21393v1 |
dimensional instance I = (I_1, I_2, ..., I_β) by letting the unknown preference vector for instance I be θ = (θ_1⊤, ..., θ_β⊤)⊤, and dividing the time horizon T into β consecutive periods, each of length T/β. For each time step t ∈ [T], the feature vectors of arms A_t are constructed from instance I_j, where j = ⌈tβ/T⌉. S... | https://arxiv.org/abs/2505.21393v1 |
arXiv:2505.21396v1 [cs.CL] 27 May 2025. Improving Research Idea Generation Through Data: An Empirical Investigation in Social Science. Xiao Liu1, Xinyi Dong2, Xinyang Gao3, Yansong Feng1∗ and Xun Pang4,5,6∗. 1Wangxuan Institute of Computer Technology, 2Yuanpei College, 3School of Government, 4School of International Studies, 5Inst... | https://arxiv.org/abs/2505.21396v1 |
is preliminary and does not guarantee sound conclusions, it provides valuable signals regarding whether the ideas are likely to be effective. As shown in Figure 1, the standard framework for LLM ideation consists of three stages: literature search, idea generation, and idea selection. Models first search related l... | https://arxiv.org/abs/2505.21396v1 |
(1) Textual data, which includes documents such as national communications and high-level statements issued by various countries, enabling both qualitative analysis and text mining. (2) Panel data, such as the Gross Domestic Product (GDP) of each country over time, facilitating longitudinal analysis of trends ove... | https://arxiv.org/abs/2505.21396v1 |
Setup. Research Topics. We generate 10 climate negotiation-related research topics using GPT-4o (Hurst et al., 2024), and manually verify them to ensure their quality. The created topics are in Appendix Table 8. Methods. We experiment with three prevalent research idea generation methods: AI-Researcher (Si et al., 2024... | https://arxiv.org/abs/2505.21396v1 |
role in generating more empirically grounded and impactful ideas. However, novelty declines for AI-Researcher and Chain-of-Ideas when evaluated by Claude, indicating that data-aware generation may limit the generation of highly unconventional ideas. w. Metadata / Tie / w/o Metadata: Significance 38.8 / 22.4 / 38.8; Novelty 42.6... | https://arxiv.org/abs/2505.21396v1 |
with GPT-4o using the code interpreter assistant, a built-in tool available in GPT models. It achieves superior performance in quantitative reasoning with data (Liu et al., 2024b), while more advanced methods can also be employed in the future. We input the hypotheses along with their corresponding data into the mo... | https://arxiv.org/abs/2505.21396v1 |
from all available datasets, is provided to an LLM. The model is prompted to determine whether the hypotheses can be tested using the provided datasets and to identify which datasets would be used for the validation. Specifically, the datasets are indexed numerically. If the model judges the hypotheses to be testable... | https://arxiv.org/abs/2505.21396v1 |
model are in bold. A slight decrease is observed in the judgment of significance and novelty. 4.4 Human Evaluation. w. Validation / Tie / w/o Validation: Significance 37.5 / 27.5 / 35.0; Novelty 45.0 / 21.7 / 33.3; Feasibility 40.0 / 33.3 / 26.7; Exp. Effectiveness 43.3 / 27.5 / 29.2; Overall 42.5 / 21.7 / 35.8. Table 4: Human comparison results of ... | https://arxiv.org/abs/2505.21396v1 |
randomly sample additional ideas from the other to complete the set of pairs. As shown in Table 5a, ideas proposed with references demonstrate higher overall quality. Specifically, improvements are observed in novelty, feasibility, and expected effectiveness. Feedback from Participants. To understand whether parti... | https://arxiv.org/abs/2505.21396v1 |
but distinct line of work focuses on hypothesis generation, where models generate hypotheses to explain phenomena given access to data, like inducing rules from observations (Zhong et al., 2023; Qiu et al., 2024). Studies in this area explore data-driven methods (Majumder et al., 2024a; Zhou et al., 2024) or integrat... | https://arxiv.org/abs/2505.21396v1 |
language models. arXiv preprint arXiv:2404.07738. Benjamin E Bagozzi. 2015. The multifaceted nature of global climate change negotiations. The Review of International Organizations, 10:439–464. Daria Blinova, Rakesh Emuru, and Benjamin E Bagozzi. 2024. Individual attendance data for over 30 years of international clim... | https://arxiv.org/abs/2505.21396v1 |
. Bodhisattwa Prasad Majumder, Harshit Surana, Dhruv Agarwal, Sanchaita Hazra, Ashish Sabharwal, and Peter Clark. 2024a. Data-driven discovery with large generative models. arXiv preprint arXiv:2402.13610 . Bodhisattwa Prasad Majumder, Harshit Surana, Dhruv Agarwal, Bhavana Dalvi Mishra, Abhijeetsingh Meena, Aryan Prak... | https://arxiv.org/abs/2505.21396v1 |
with large language models. arXiv preprint arXiv:2404.04326. A Data Collection. Table 6 presents the full list of datasets in CLIMATE DATABANK and corresponding data descriptions. Table 7 demonstrates the climate negotiation papers we collect for the automatic validation experiments. National communications, high... | https://arxiv.org/abs/2505.21396v1 |
during idea selection and automatic evaluation, as well as to human annotators for reference. E Implementation Details. For the research idea generation methods, we adhere to their original hyperparameters but modify the idea generation prompts to include instructions related to idea formats, and add the metadata. Sin... | https://arxiv.org/abs/2505.21396v1 |
data descriptions in CLIMATE DATABANK. ID / Paper Title: 1. The Multifaceted Nature of Global Climate Change Negotiations (Bagozzi, 2015); 2. A Closer Look at the Information Provision Rationale: Civil Society Participation in States’ Delegations at the UNFCCC (Böhmelt, 2013); 3. Sectors, Pollution, and Trade: How Industrial ... | https://arxiv.org/abs/2505.21396v1 |
selection, and automatic evaluation. Experiments are conducted on 8 NVIDIA A800 GPUs. F Experimental Results Table 10 presents the ELO scores for research idea generation methods with and without metadata, serving as the tabular counterpart to Figure 3. G Case Study Table 11 presents an example of ideas generated by GP... | https://arxiv.org/abs/2505.21396v1 |
accessed or acquired with reasonable effort? - Are funding, human resources, and technical support sufficient? 2. Timeline Appropriateness: - Can the study be realistically completed within one year? - Does the research have clear stages with achievable milestones? 3. Technical and Methodological Soundness: - Are the p... | https://arxiv.org/abs/2505.21396v1 |
climate commitments, leading to weaker compliance. Additionally, institutional mechanisms such as independent regulatory agencies, judicial oversight, and civil society engagement may enhance the credibility and accuracy of compliance monitoring. Hypotheses: 1. Countries with higher Democracy Index scores are more likely... | https://arxiv.org/abs/2505.21396v1
The relationship was not significantly stronger in democracies, so the hypothesis is not supported. 8. Hypothesis 2 is not supported, as the relationship between NSA participation and NDC ambition is not significantly stronger in democracies compared to autocracies. 9. Hypothesis 3 could not be tested due to the absenc... | https://arxiv.org/abs/2505.21396v1 |
ideas generated} Please write down your {number of ideas to generate} ideas. Output the ideas in json format as a dictionary, where the key is ’ideas’, and the value is a list of ideas. Each idea has keys ’Research Question’, ’Theory’, ’Hypotheses’, and ’Policy Implication’. The value of ’Hypotheses’ is a list of strin... | https://arxiv.org/abs/2505.21396v1 |
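The prompt in the row above asks the model to emit ideas as a JSON dictionary keyed by 'ideas', each idea carrying four fixed keys with 'Hypotheses' as a list of strings. A small sketch of parsing and checking that schema (the sample values below are invented placeholders, not paper content):

```python
import json

# Placeholder output in the schema described by the prompt above.
raw = json.dumps({
    "ideas": [
        {
            "Research Question": "placeholder question",
            "Theory": "placeholder theory",
            "Hypotheses": ["H1 as a string", "H2 as a string"],
            "Policy Implication": "placeholder implication",
        }
    ]
})

parsed = json.loads(raw)
required = {"Research Question", "Theory", "Hypotheses", "Policy Implication"}
for idea in parsed["ideas"]:
    assert required <= idea.keys()               # all four keys present
    assert isinstance(idea["Hypotheses"], list)  # hypotheses are a list
```

Validating model output against the declared schema this way catches malformed generations before they reach downstream selection or evaluation steps.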
to the research question, and the hypotheses identify observable implications of the theory. 2. Output Requirements: - Format your response in JSON with the keys: ’Feasibility’, ’Validation Plan’, and ’Data Used’. - ’Feasibility’: This can take values from [’Yes’, ’No’]. It indicates whether the hypotheses can be valid... | https://arxiv.org/abs/2505.21396v1 |
any position biases and ensure that the order of the two ideas does not influence your decision. DO NOT allow the LENGTH of the ideas to influence your evaluation. Be as objective as possible. Here are the two research ideas for you to assess: Idea 1: {content of idea 1, containing the summarized validation process } I... | https://arxiv.org/abs/2505.21396v1 |
arXiv:2505.21398v1 [cs.AI] 27 May 2025 A Structured Unplugged Approach for Foundational AI Literacy in Primary Education Maria Cristina Carrisi1[0000−0002−2837−3971], Mirko Marras1[0000−0003−1989−6057], and Sara Vergallo2[0009−0006−2129−5583] 1University of Cagliari, Cagliari, Italy {mariacri.carrisi,mirko.marras}@unica... | https://arxiv.org/abs/2505.21398v1
[25], showing the importance of integrating AI and CS into education from childhood. However, limited computational resources historically slowed down both AI development and its integration into curricula. In recent years, AI and robotics have renewed interest in early AI education, prompting organizations to estab-... | https://arxiv.org/abs/2505.21398v1
support students’ cognitive development in both AI and mathemat- ics. Third, we evaluate the effectiveness of the learning path through a study involving two primary school classes, analyzing both quantitative performance and qualitative feedback to assess conceptual gains and engagement. 2 The Proposed Learning Path I... | https://arxiv.org/abs/2505.21398v1 |
provides specific humidity values. The third system uses humidity and light sensors to determine when watering is necessary, following a structured set of predefined logical rules. This helps students understand how multiple data sources enhance perception and how such systems rely on rule-based decision-making. Subs... | https://arxiv.org/abs/2505.21398v1
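The row above describes a watering system that combines humidity and light sensors under predefined logical rules. A hypothetical sketch of such a rule set (thresholds and rule structure are invented for illustration, not taken from the learning path in arXiv:2505.21398):

```python
# Hypothetical rule-based watering decision combining two sensor readings.
# All thresholds below are illustrative assumptions.

def should_water(humidity_pct: float, light_lux: float) -> bool:
    """Decide whether to water based on soil humidity and ambient light."""
    DRY_THRESHOLD = 30.0       # below this humidity, the soil counts as dry
    BRIGHT_THRESHOLD = 10000   # above this light level, evaporation is high
    if humidity_pct < DRY_THRESHOLD:
        return True   # dry soil: always water
    if humidity_pct < 50.0 and light_lux > BRIGHT_THRESHOLD:
        return True   # moderately dry and sunny: water preemptively
    return False      # otherwise, do nothing
```

The point for students is that each rule is an explicit if-then statement a person wrote down, not something the system learned, and that combining two sensors allows decisions neither sensor could support alone.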