regions receive lower scores in subsequent iterations, the initial performance penalty demonstrates that excessive sampling can be counterproductive. These findings underscore the importance of balanced exploitation within selected regions: sufficient sampling to capitalize on promising areas without overcommitting computational budget to potentially sub-optimal regions.

Impact of Number of Partitions Selected per Trial (M). We examined how the number of partitions M ∈ {1, 3, 5, 7} selected per trial affects optimization performance. As seen in Figure 9, right, setting M = 1 notably hinders performance, particularly during initial iterations, as HOLLM severely restricts exploration breadth by focusing all sampling efforts on a single region per iteration. When the initially chosen region lacks global promise, progress becomes slow; the method can eventually exploit good regions once identified, but convergence is typically slower than under broader exploration strategies. Conversely, increasing M to moderate or high values (3, 5, or 7) generally improves initial exploration by enabling simultaneous consideration of multiple diverse regions. This expansion of the candidate selection pool allows the algorithm to benefit from a larger, more diverse set of proposals per trial, improving performance up to a saturation point. The results demonstrate that balanced multi-region exploration with an appropriate M outperforms overly focused single-region strategies, highlighting the importance of maintaining exploration breadth while preserving the ability to exploit promising areas effectively.

C.1.1 Impact of exploration parameter α_max

We evaluated the effect of different exploration settings on the FCNet benchmarks across four tasks: PROTEIN, NAVAL, PARKINSONS, and SLICE.
Results in Figure 10 show that the impact of the exploration parameter α_max exhibits task-dependent variation, with optimal settings determined by the underlying problem structure. Higher values of α_max bias the search toward less-explored regions, which proves beneficial for highly multimodal or non-convex landscapes where diverse exploration is crucial for escaping local optima. Conversely, lower α_max values reduce exploration of new regions and concentrate search efforts on exploitation, which may be more appropriate for smoother or convex solution spaces where intensive local search around promising areas yields better returns. These findings suggest that prior knowledge of the task landscape characteristics, such as modality, convexity, and noise structure, can effectively guide the selection of α_max to match the exploration-exploitation balance to the problem's inherent difficulty and structure.

C.1.2 Effect of Hyperparameters on Computational Cost

The computational complexity of our method scales directly with the total number of candidate points generated per iteration, calculated as k · M. When employing computationally expensive models
https://arxiv.org/abs/2505.21372v1
Figure 10: Performance comparison of exploration parameter settings (α_max ∈ {0.2, 0.5, 0.7, 1.0}) across the four FCNet benchmark tasks. Each curve represents the mean objective function value over 5 independent runs, with shaded regions denoting standard error. The results illustrate that optimal α_max selection exhibits task-dependent behavior, reflecting the varying landscape characteristics and exploration requirements across different hyperparameter optimization problems.

such as large language models accessed via API calls or hosted locally, increases in either M (which amplifies the number of inference calls per iteration) or k (which extends the number of output tokens generated per call) result in proportionally higher inference times and associated costs per optimization step. This creates a fundamental trade-off between optimization performance and computational efficiency that practitioners must weigh against their specific resource constraints and performance requirements. Our default configuration of M = 5 and k = 5, yielding 25 candidate evaluations per iteration, represents a calibrated compromise that balances exploration capability with computational practicality across the diverse benchmarks in our experimental evaluation.

D Prompts

We design structured prompts for LLMs for both candidate generation and evaluation prediction within our optimization framework. Our prompting strategy consists of two complementary components: a candidate generation prompt (Listing 3) that produces new solutions based on historical observations, and an evaluation prediction prompt (Listing 4) that estimates the quality of proposed candidates; the two can also be combined in a single prompt. Each prompt follows a systematic structure:

1. Task specification: We provide a description of the optimization objective and establish the problem context, including the nature of the search space and optimization goals.
2. Dynamic constraints: We define the feasible region constraints (boundaries) derived from the current KD-tree partitioning. These constraints are automatically computed based on the selected leaf nodes and translated into natural language descriptions that specify valid input ranges alongside example-evaluation pairs.

3. In-context examples: We supply the model with historical observations consisting of previously evaluated points and their corresponding objective function values. These examples serve as demonstrations to guide the model's understanding of the optimization landscape and desired output format.

4. Task-specific instructions: We provide explicit directives tailored to each prompt's purpose. The candidate generation prompt instructs the model to propose new configurations, while the evaluation prediction prompt directs it to estimate performance for given candidates. Both prompts enforce structured JSON output formatting for automated parsing of model responses.

Throughout our prompts, we employ placeholder variables denoted with $ symbols to represent task-specific information that is dynamically populated during execution. Comprehensive examples and descriptions for each placeholder are provided in Table 3. For the NAS-Bench-201 experiments, we adopt a streamlined approach using a unified prompt structure (Listing 5) that simultaneously elicits both candidate proposals and their predicted evaluations in a single model query, reducing the computational overhead of separate generation and prediction phases.

Table 3: Description of placeholders for candidate proposal and prediction prompts.

$metrics
  Description: The performance metrics for the specific task.
  Example: F1 (lower is better)

$region_constraints
  Description: The allowable ranges or discrete values for the parameters in the configuration search space.
  Example: { lr: range(float([0.0, 0.9])), activation: choice(["relu", "tanh"]), num_layer: range(int([1, 20])), ... }

$Region_ICL_examples
  Description: Examples of previously evaluated configurations and their performance metrics. These are the in-context learning examples.
  Example: { {lr: 0.4, activation: "relu", num_layer: 8, ...} F1: 5.65; {lr: 0.03, activation: "tanh", num_layer: 8, ...} F1: 3.23; ... }

$target_number_of_candidates
  Description: The number of new configurations that the candidate sampler should generate.
  Example: 15

$candidate_sampler_response_format
  Description: The required JSON structure for each new candidate configuration proposed by the sampler.
  Example: { lr: ?, activation: ?, num_layer: ?, ... }

$target_architectures
  Description: The set of new configurations for which the surrogate model predicts the performance metrics.
  Example: { 1: {lr: 0.4, activation: "relu", num_layer: 8, ...}, 2: {lr: 0.03, activation: "tanh", num_layer: 8, ...}, ... }

$surrogate_model_response_format
  Description: The required JSON structure for the performance prediction output.
  Example: { F1: ? }

Suggest 100 random samples for 8 dimensions within the specified bounding box with a maximum of 3 decimal places.

Bounding Box:
x1_min: 0, x1_max: 1
x2_min: 0, x2_max: 1
x3_min: 0, x3_max: 1
x4_min: 0, x4_max: 1
x5_min: 0, x5_max: 1
x6_min: 0, x6_max: 1
x7_min: 0, x7_max: 1
x8_min: 0, x8_max: 1

Return the suggestions in the following JSON format exactly, without any additional text:
[{"x1": float, "x2": float, "x3": float, "x4": float, "x5": float, "x6": float, "x7": float, "x8": float}]

Listing 1: Prompt for LLMs simulating 100 8-D uniform random samples.

Suggest 80 sample points in 2 dimensions within the specified bounding box.

Bounding Box:
x1_min: 0, x1_max: 1
x2_min: 0, x2_max: 1

such that they are clustered around the points given below.

Points:
Point 1: x1: 0.25, x2: 0.25
Point 2: x1: 0.75, x2: 0.75

Return the suggestions in the following JSON format exactly, without any additional text:
[{"x1": float, "x2": float}]

Listing 2: Prompt for LLMs sampling 80 points around the minima.
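As a concrete illustration of the $-placeholder mechanism described above, the sketch below fills a shortened prompt fragment using Python's string.Template, whose substitution syntax happens to match the $identifier convention. The template text and values are abbreviated examples, not the full Listing 3, and using string.Template is our assumption; the paper does not specify the substitution mechanism.

```python
from string import Template

# Hypothetical sketch: filling $-prefixed placeholders of a (shortened)
# candidate-generation prompt. string.Template is an assumed mechanism,
# chosen because its $identifier syntax matches the paper's convention.
prompt_template = Template(
    "## Constraints\n"
    "The allowable ranges for the hyperparameters are:\n"
    "$region_constraints\n"
    "\n"
    "## Your Task\n"
    "Generate $target_number_of_candidates new architecture configurations."
)

filled = prompt_template.substitute(
    region_constraints='{ lr: range(float([0.0, 0.9])), num_layer: range(int([1, 20])) }',
    target_number_of_candidates=15,
)
```

Template.substitute converts non-string values with str(), so numeric placeholders such as the candidate count can be passed directly.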
# Optimization task

## Problem Description
You are tasked with solving an optimization problem that requires finding optimal solutions.
- **Evaluation**: Configurations are measured by $metrics

## Constraints
The allowable ranges for the hyperparameters are:
$region_constraints

## Previously Evaluated Architectures
Below are examples of architectures that have been evaluated, showing their operations and performance metrics:
$Region_ICL_examples

## Your Task
Generate $target_number_of_candidates new architecture configurations that:
1. Are likely to achieve lower $metrics than the examples
2. Are different from all previously evaluated architectures
3. Satisfy all the specified constraints: $region_constraints

## Output Format
Each configuration has to follow this format:
$candidate_sampler_response_format

Provide your response in a JSON list containing each proposed configuration. Return only the required JSON list output without additional text.

Listing 3: Prompt template used for candidate point generation in the LLM_Generate function.

# Configuration Performance Prediction

## Problem Description
You are tasked with predicting the performance of configurations.
- **Evaluation Metric**: $metrics (to be predicted)
- **Constraint**: The allowable ranges for the hyperparameters are:
$region_constraints

## Reference Configurations with Known Performance
Below are examples of configurations that have been evaluated, showing their operations and performance metrics:
$Region_ICL_examples

## Candidate Configurations to Evaluate
You must predict performance for these new configurations:
$target_architectures

## Your Task
1. Predict the $metrics value for each candidate configuration
2. Base your predictions on patterns in the reference examples

## Output Format
Each evaluation has to follow this format:
$surrogate_model_response_format

Provide your response in a JSON list containing each proposed evaluation. Return only the required JSON list output without additional text.

Listing 4: Prompt template used for performance prediction in the LLM_Generate function.

Suggest 8 new candidate point(s) for maximizing a blackbox function in a 6-dimensional search space.

Below are some examples of previously evaluated points with their corresponding function values:
[
  { "x1": 0.034, "x2": 0.287, "x3": 0.773, "x4": 0.175, "x5": 0.755, "x6": 0.608, "value": -37.093 },
  { "x1": 0.199, "x2": 0.433, "x3": 0.405, "x4": 0.779, "x5": 0.186, "x6": 0.594, "value": -37.84 },
  { "x1": 0.447, "x2": 0.342, "x3": 0.97, "x4": 0.087, "x5": 0.115, "x6": 0.533, "value": -44.52 },
  { "x1": 0.949, "x2": 0.127, "x3": 0.659, "x4": 0.546, "x5": 0.049, "x6": 0.265, "value": -33.067 }
]

The search space is defined by the following bounding boxes:
x1_min: 0.492, x1_max: 1.000
x2_min: 0.000, x2_max: 1.000
x3_min: 0.000, x3_max: 1.000
x4_min: 0.000, x4_max: 1.000
x5_min: 0.000, x5_max: 1.000
x6_min: 0.000, x6_max: 1.000

Based on the examples above, suggest candidate points that balance exploration (sampling new regions) with exploitation (focusing on promising areas where function values are good). Each candidate point must lie within the specified bounding boxes.
In addition, predict an estimated function value for each candidate.

Return the suggestions in the following JSON format exactly, without any additional text:
[{"x1": float, "x2": float, "x3": float, "x4": float, "x5": float, "x6": float, "value": float}]

Listing 5: Prompt example used for simultaneous candidate generation and performance prediction in the LLM_Generate function for NAS-Bench-201.
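Since the prompts above demand a bare JSON list, a caller must still parse the response and enforce the bounding box, because a model may occasionally emit out-of-range values. Below is a minimal validation sketch; the bounds, the sample response, and the parse_candidates helper are illustrative, not part of the paper's implementation.

```python
import json

# Hypothetical helper: parse the JSON list returned for a Listing-5 style
# prompt and drop candidates that violate the bounding box.
def parse_candidates(response: str, bounds: dict) -> list:
    candidates = json.loads(response)
    valid = []
    for c in candidates:
        # keep only candidates inside the box on every bounded dimension
        if all(lo <= c[k] <= hi for k, (lo, hi) in bounds.items()):
            valid.append(c)
    return valid

# Illustrative bounds matching the first two dimensions of Listing 5
bounds = {"x1": (0.492, 1.0), "x2": (0.0, 1.0)}
response = ('[{"x1": 0.6, "x2": 0.3, "value": -35.1},'
            ' {"x1": 0.1, "x2": 0.5, "value": -40.0}]')
kept = parse_candidates(response, bounds)  # second point violates x1_min
```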
arXiv:2505.21388v1 [cs.SI] 27 May 2025

DeSocial: Blockchain-based Decentralized Social Networks

Jingyuan Huang, Xi Zhu, Minghao Guo, Yongfeng Zhang∗ (Rutgers University)

Abstract

Web 2.0 social platforms are inherently centralized, with user data and algorithmic decisions controlled by the platform. Users can only passively receive social predictions without being able to choose the underlying algorithm, which limits personalization. With the emergence of blockchain, users can choose algorithms tailored to their local situation, improving prediction results in a personalized way. In a blockchain environment, each user possesses their own model to perform social prediction, capturing different perspectives on social interactions. In our work, we propose DeSocial, a decentralized social network learning framework deployed on an Ethereum (ETH) local development chain that integrates distributed data storage, node-level consensus, and user-driven model selection through Ganache. In the first stage, each user leverages DeSocial to evaluate multiple backbone models on their local subgraph. DeSocial coordinates the execution and returns model-wise prediction results, enabling the user to select the most suitable backbone for personalized social prediction. Then, DeSocial uniformly selects several validation nodes that possess the algorithm specified by each user and aggregates the prediction results by majority voting, to prevent errors caused by any single model's misjudgment. Extensive experiments show that DeSocial clearly improves over five classical centralized social network learning models, promoting user empowerment in blockchain-based decentralized social networks and showing the importance of multi-node validation and personalized algorithm selection based on blockchain. Our implementation is available at: https://github.com/agiresearch/DeSocial.
1 Introduction

Social network learning algorithms have become a central tool for modeling and predicting user behavior in social networks [26, 77, 69, 66, 83], enhancing content search, advertising and recommendation, and improving user experience across multiple platforms [91, 6, 68, 77, 64, 14, 59]. Despite impressive advances in graph-based recommendations [67, 16, 75, 78, 84, 72, 57, 58], the main applications are still deeply rooted in the Web 2.0 paradigm of centralized platforms with exclusive control over user data and model deployment. The platform itself selects the predictive algorithms, trains the models, and provides recommendations to users without transparency or user involvement [17, 12]. As a result, users are forced to accept predictions derived from a fixed model pipeline, regardless of whether the prediction fits their personal preferences or structural context. This model limits personalization and does not reflect the diversity of users' local environments, especially in large graphs. Moreover, relying on a single model limits expressive power, even if that model is state-of-the-art. These challenges point to the need for frameworks that allow users to play a more active role in the algorithm selection and prediction process, especially in decentralized environments that effectively support model diversity and personalization.

∗Author Emails: {chy.huang, xi.zhu, minghao.guo, yongfeng.zhang}@rutgers.edu

Preprint. Under review.

Figure 1: The differences between Web 2.0 social networks and Web 3.0 social networks. In Web 2.0, users passively receive social feeds. In Web 3.0, users receive feeds via personalized algorithms.

By contrast, blockchain as a Web 3.0 technology is a promising alternative [19, 42, 63, 76, 2]. With their emphasis on decentralization [40, 19], transparency [93, 29], and verifiable interactions [47, 30], blockchain-based systems offer a new paradigm for social, transactional, and economic behavior that is inherently more user-centric and transparent [56, 62, 31, 41]. Despite their rapid development, blockchain systems are not yet fully integrated with modern artificial intelligence approaches, especially in social network applications, where such synergies could be most impactful [19, 56]. Therefore, there is a significant gap in applying graph learning algorithms to decentralized social prediction tasks, which provides new capabilities for building more personalized, user-driven, and decentralized social network systems. To close this gap, we propose the DeSocial framework, which allows users to regain control over the prediction process on their own behalf. In DeSocial, each user has the right to select the most favorable prediction model from a library of graph learning networks (e.g., MLP [51], GCN [24], GAT [55], GraphSAGE [13], or SGC [64]). This selection is based on a neighborhood sampling evaluation, ensuring that the selected model best captures the user's local environment. To ensure reliability, DeSocial applies a smart contract that is transparent to every node: a group of validators is selected to run their models independently, evaluate the link prediction query, and make the decision together by a majority vote. This approach not only increases the robustness of the predictions, but also removes single-point algorithmic control, aligning outcomes with both user intent and system integrity. Our contributions can be summarized as follows:

•Problem Formulation.
We propose a novel task setting where link predictions emerge from a decentralized consensus among validators rather than being computed by a central model. Each validator runs its own graph learning backbone, and a majority vote mechanism determines the final prediction. This formulation captures realistic constraints in blockchain environments, such as data locality, validator trust boundaries, and transparent observability.

•Novel Framework. We propose DeSocial, a novel framework that integrates blockchain infrastructure with graph learning for decentralized social network prediction. It enables personalized model selection, user-driven validator community formation, and majority-vote consensus, aligning with the logic of real-world blockchain protocols while improving social network predictions. DeSocial is deployed on an ETH local development chain environment.

•Extensive Evaluations. We conduct comprehensive experiments on four representative graph datasets spanning the domains of Web 3.0 transaction networks, email communication graphs, and interest-based social networks. Our results show that DeSocial outperforms all five classic centralized baselines in terms of link prediction accuracy, demonstrating the superiority of decentralized graph learning algorithms in the blockchain context.

2 Related Works

2.1 Graph Learning for Social Networks

Existing graph learning methods for social networks are advanced, including graph neural networks (GNNs) [24, 55, 13, 64, 71, 48, 14, 26, 52, 43, 59], meta-learning frameworks [91, 6, 83, 77, 57], Transformer-based approaches [66, 65, 82], and large language model (LLM)-based approaches [34, 53, 4, 60, 80, 15, 85, 92], most of which are designed for centralized environments, where the full graph structure and node features are aggregated on a single server and optimized jointly. In contrast, our setting assumes a
completely decentralized environment where each user node has full access to the local view of the social network structure, but raw data cannot be shared across nodes [87, 45, 37, 46]. Furthermore, the storage and computational capabilities of each node are limited, making it infeasible to deploy large-scale models such as LLM-based encoders [28, 53, 20, 49] or even to invoke external LLM APIs [73, 33, 36, 74, 89, 86, 22, 73, 23, 21]. This practical limitation leads us to restrict each node to a lightweight graph backbone. Therefore, we selected five representative and well-studied models: MLP [51], GCN [24], GAT [55], GraphSAGE [13], and SGC [64], as candidate backbones for our framework. These backbones achieve a practical balance between representational power and computational efficiency, which is sufficient to test the effectiveness of model selection and consensus in decentralized graph learning.

2.2 Blockchain Consensus Mechanism

Blockchain consensus mechanisms enable distributed agreement without central control [39, 42, 63]. The original Bitcoin blockchain relied on Proof-of-Work (PoW) [42] consensus, where the longest chain determines the accepted state. While PoW provides an open network, it is energy-intensive and only provides probabilistic finality. More relevant to our context are the consensus protocols Proof-of-Stake (PoS) [63, 76, 2], Proof-of-Authority [1, 2], and Byzantine Fault Tolerance (BFT) [27, 35, 54], which achieve finality by voting among selected validators. However, these methods either do not achieve complete decentralization [9, 18], or select a large number of validators to participate in verification, which greatly reduces the operating speed of the network [81]. These concerns about scalability, centralization, and running efficiency pose challenges for blockchain-based decentralized social network algorithm design.
Therefore, in DeSocial, a limited set of validators is selected to participate in the consensus for each verification, which guarantees the efficiency, robustness, and complete decentralization of the network.

2.3 Ensemble Learning and Majority Voting

Ensemble learning, which combines multiple models to improve predictions, is well recognized in machine learning [7, 3, 11, 38, 50]. In graph learning, ensemble methods help mitigate overfitting to specific topologies or noisy subgraphs, especially when models are trained on different views of the graph or initialized with different parameters [8, 90, 5]. Voting is essential in the blockchain consensus process because it enables nodes to collectively agree on the validity of transactions [70]. Similarly, voting serves as a mechanism for aggregating decisions from multiple nodes in decentralized graph learning, ensuring that predictions are agreed upon through collective decision-making rather than centralized authority. Common voting strategies include soft voting [61, 79], where models make probabilistic predictions and the averages are used, and hard voting [10, 61, 79], where each model casts a binary vote and the final outcome is the majority. DeSocial adopts hard voting since it aligns naturally with blockchain consensus mechanisms, where validations are made through majority approval. Voting-based decisions are also more robust and less vulnerable to manipulation.

3 Problem Definition

We formally define the decentralized temporal link prediction task on social graphs. We aim to fill the gap between centralized graph learning and blockchain-based decentralization. Every node has access to the complete structural information, since in blockchain the formation of social relations
corresponds to the successful validation of a transaction and its subsequent broadcasting to all nodes.

Definition 3.1 (Temporal Graph). A temporal graph can be formally introduced as $\mathcal{G}_t = (\mathcal{V}_t, \mathcal{E}_t)$, where $\mathcal{V}_t$ and $\mathcal{E}_t$ denote a set of $N$ nodes and a set of directed edges at time $t$, respectively.

Definition 3.2 (Node-Specific Backbone). We denote the backbone used by node $u$ as $f_{\Theta_u} \in \mathcal{F}$, where $u$ is a node, $\Theta_u$ is the parameters of its backbone, and $\mathcal{F} = \{F_1, F_2, \ldots\}$ is the backbone pool.

Definition 3.3 (Vote). A vote is the boolean decision made by a validator $\phi$, denoted as $\mathrm{Vote}(\phi, p, q, t) \in \{0, 1\}$, indicating whether $\phi$ agrees with the link $(p, q) \in \mathcal{G}_t$.

Definition 3.4 (Verification). A verification by a group of validators $\Phi_{p,q,t}$ for validating the link $(p, q)$ at time $t$ can be formally defined as:

\mathrm{Ver}(\Phi_{p,q,t}, p, q, t) = \mathrm{Majority}\big(\{\mathrm{Vote}(\phi, p, q, t) \mid \phi \in \Phi_{p,q,t}\}\big) \qquad (1)

where $\mathrm{Majority}(\cdot)$ is the majority of a list of decisions.

The goal of our task is to predict whether a future link $(u, v)$, $u, v \in \mathcal{V}$, will be formed in $\mathcal{G}_{t+1}$, where $\mathcal{V}$ denotes the full node set. Unlike classical settings where the entire graph and model are centrally managed, we study this task in a decentralized, blockchain-based environment. In this environment, the link prediction of $(u, v)$ is regarded as a transaction verification requested by $u$, targeted at $v$. At time $t$, each validator $\phi$ uses its backbone model $f_{\Theta_\phi}$ with parameters $\Theta_\phi$, as well as its temporal graph data $\mathcal{G}_t$, to compute the output embeddings $z_p$ of the initiator and $z_q$ of the target. We also sample a negative target set $\mathrm{Neg}(\phi, p, q, t)$ to indicate false links $(p, q')$ with $q' \in \mathrm{Neg}(\phi, p, q, t)$, simulating multiple spendings in blockchain, for the comparison of prediction probabilities. The decision can be interpreted as comparisons among the cosine similarities of $z_p$ and $z_r$, $r \in \{q\} \cup \mathrm{Neg}(\phi, p, q, t)$.
Therefore, the decision can be formally defined as:

\mathrm{Vote}(\phi, p, q, t) = \mathbb{I}\Big( \bigwedge_{q' \in \mathrm{Neg}(\phi,p,q,t)} \sigma\big( \tfrac{z_p^{\top} z_q}{\|z_p\| \|z_q\|}; f_{\Theta_\phi}(\mathcal{D}_t) \big) > \sigma\big( \tfrac{z_p^{\top} z_{q'}}{\|z_p\| \|z_{q'}\|}; f_{\Theta_\phi}(\mathcal{D}_t) \big) \Big) \qquad (2)

where $\sigma$ is the sigmoid function. When the decentralized network receives a verification request $(u, v)$ at time $t$, the network selects a set of validators $\Phi$. Each validator $\phi_i \in \Phi$ makes a decision $\mathrm{Vote}(\phi_i, u, v, t)$ based on its backbone model $f_{\Theta_{\phi_i}}$ and the temporal graph data $\mathcal{D}_t = \bigcup_{\tau=0}^{t} \mathcal{G}_\tau$.

Definition 3.5 (Decentralized Learning on Temporal Graphs). Decentralized learning, in which all validator nodes in the current time period collectively participate in predicting the graph structure for the next time step, is formulated as two steps:

\{\Theta^{t-1}_u \mid u \in \mathcal{V}^{t}_{\mathrm{val}}\} \xrightarrow{\ \mathrm{test}\ } \mathcal{G}_t \qquad (3)

\min \{\mathrm{Loss}(\hat{\mathcal{G}}_t, \mathcal{G}_t; \Theta^{t}_u)\} \ \text{for each} \ u \in \mathcal{V}^{t}_{\mathrm{val}}, \qquad \mathcal{V}^{t}_{\mathrm{val}} = \bigcup_{(p,q) \in \mathcal{G}_t} \Phi_{p,q,t} \qquad (4)

Instead of optimizing a centralized model $\Theta^t$ and testing the next period of graph $\mathcal{G}_t$, decentralization utilizes the model parameters of all nodes to test $\mathcal{G}_t$, and all validator nodes $u \in \mathcal{V}^{t}_{\mathrm{val}}$ in the current period $t$ optimize their model parameters $\Theta^{t}_u$ independently. This problem has three unique challenges:

•No global optimization: There is no centralized end-to-end training because of the distributed storage of data and backbone models.

•No shared parameters: Validators may use different models with
different inductive biases.

•Consensus under variance: Final decisions must tolerate noise and diversity among validators.

4 Our Framework

4.1 Framework Overview

Our framework DeSocial comprises two modules: (1) a personalized algorithm selection mechanism that allows users to select the algorithms for their prediction requests, and (2) a decentralized consensus voting scheme among validators of the chosen backbone to produce the final output. We also describe the architecture of DeSocial that orchestrates these modules on a blockchain platform.

Figure 2 illustrates the personalized algorithm selection. A user p requests the blockchain, indicating that it wants to select an algorithm from the pool F. In the meantime, p forms a historical neighbor validation exam by sampling a set of historical neighbors from G_0, G_1, ..., G_t. The blockchain then asks nodes r_1, r_2, ..., r_{|F|}, each holding one backbone in F, to take this exam, i.e., to predict the sampled links of p. Each r_i then returns its exam result to p via the blockchain. Finally, p selects the algorithm with the best exam result as its personalized algorithm F_p.

Figure 3 illustrates the decentralized consensus voting scheme. Each user p_i, (p_i, q_i) ∈ G_{t+1}, first sends a request to the blockchain to validate q_i. After that, the blockchain uniformly samples n nodes, all of which adopted F_{p_i}, to build a validation community Φ_{p_i,q_i,t}. Each ϕ_j ∈ Φ_{p_i,q_i,t} constructs a prediction task by sampling different negative edges (p_i, q′_i), q′_i ∈ Neg(ϕ_j, p_i, q_i, t), using its model f_{Θ_{ϕ_j}} to predict the edges and voting for the one with the highest probability. (p_i, q_i) is predicted as true if more than half of the validators in Φ_{p_i,q_i,t} agree with it.
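The voting process described above (Eq. 1 and Eq. 2) can be sketched as follows, with plain cosine similarity standing in for the model-conditioned score σ(·; f_Θ(D_t)) and illustrative two-dimensional embeddings; this is a simplified sketch, not the DeSocial implementation.

```python
import math

# Simplified sketch of Eqs. (1)-(2): a validator votes positive only if the
# true target q outscores every sampled negative q'; verification is the
# strict majority over the committee. Embeddings and committee size are
# illustrative.
def cos(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def vote(z_p, z_q, negatives):
    # Eq. (2): indicator that the positive target beats all negatives
    return int(all(cos(z_p, z_q) > cos(z_p, z_n) for z_n in negatives))

def verify(votes):
    # Eq. (1): positive decision iff more than half the committee agrees
    return sum(votes) > len(votes) // 2

z_p, z_q = [1.0, 0.0], [0.9, 0.1]
negatives = [[0.0, 1.0], [-1.0, 0.2]]
committee_votes = [vote(z_p, z_q, negatives) for _ in range(5)]
decision = verify(committee_votes)
```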
Figure 2: The personalized algorithm selection module allows user p to choose an algorithm from F. In this case, F = {GCN, SGC, GraphSAGE, GAT}.

4.2 Personalized Backbone Algorithm Selection

To allow each user to independently select a social network algorithm, our framework enables users to incorporate their local structural information into Algorithm 1. Given a set of backbone algorithms F, DeSocial allows each node u ∈ V to select an algorithm F_u ∈ F.

Algorithm 1: Personalized Algorithm Selection
Input: Backbone pool set F, neighborhood set size γ, adjust coefficient α, and graph data D_t
Output: The personalized algorithm list {F_u | (u, v) ∈ G_{t+1}}
for u ∈ {u | (u, v) ∈ G_{t+1}} do
    Calculate Γ by Eq. 5;
    for (v_p, v_n) ∈ Γ do
        Calculate Π_{u,v_p} given t_e and α;
    end
    u requests the blockchain via the smart contract and finds r_1, r_2, ..., r_{|F|};
    for i ∈ {1, 2, ..., |F|} do
        for (v_p, v_n) ∈ Γ do
            r_i calculates z_u, z_{v_p}, z_{v_n} given f_{Θ_{r_i}}(D_t);
            r_i calculates the probabilities of (u, v_p) and (u, v_n);
        end
        r_i returns the weighted sum of the probability comparisons via the blockchain by the smart contract;
    end
    u selects F_u by Eq. 6;
end
return {F_u | (u, v) ∈ G_{t+1}}

Specifically, at time t, given a node u, we sample its positive and negative neighbor pairs

\Gamma = \Big\{ (v_p, v_n) \ \Big|\ \Big( v_p \in \bigcup_{\tau=0}^{t} \mathcal{N}_\tau(u) \Big) \wedge \Big( v_n \notin \bigcup_{\tau=0}^{t} \mathcal{N}_\tau(u) \Big) \Big\} \qquad (5)

with a size of γ. We select F_u through Eq. 6, where Π_{u,v_p} = exp(α · (t − t_e)) denotes the edge weight for algorithm selection; edges that emerge later have greater weights, t_e is the emergence time of (u, v_p), and α is the adjust coefficient. By leveraging the local subgraph and the blockchain, users can choose models that best fit their individual network context. This selection improves validation success rates and enhances the overall robustness and performance of social network prediction.

F_u = \arg\max_{f \in \mathcal{F}} \sum_{(v_p, v_n) \in \Gamma} \mathbb{I}\Big( \sigma\big( \tfrac{z_u^{\top} z_{v_p}}{\|z_u\| \|z_{v_p}\|}; f(\mathcal{D}_t) \big) > \sigma\big( \tfrac{z_u^{\top} z_{v_n}}{\|z_u\| \|z_{v_n}\|}; f(\mathcal{D}_t) \big) \Big)\, \Pi_{u,v_p} \qquad (6)

Figure 3: Decentralized consensus voting in DeSocial. To predict G_{t+1}, users p_i request to predict links with q_i using personalized algorithms. The blockchain samples n = 5 validators with matching algorithms to predict and finalizes the result via majority voting through a smart contract.

4.3 Decentralized Consensus Voting Scheme

In decentralized prediction, the prediction of a single validator may be affected by local noise or model variance. With multiple validators, errors made by a single validator can be tolerated through the majority voting mechanism. DeSocial conducts a consensus process among the validators in the communities by Algorithm 2, incorporating a smart contract deployed on the blockchain.
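The selection rule of Eq. 6 and Algorithm 1 can be sketched as follows, assuming each exam returns, per (v_p, v_n) pair, the backbone's scores for the positive and negative neighbor plus the positive edge's emergence time t_e. Backbone names, scores, and timestamps are illustrative; the recency weight Π_{u,v_p} = exp(α · (t − t_e)) is implemented exactly as written in the text.

```python
import math

# Illustrative sketch of Eq. (6): each candidate backbone is scored by the
# Pi-weighted count of (positive, negative) neighbor pairs it ranks
# correctly, and the best-scoring backbone is selected.
def select_backbone(exam_results, t, alpha):
    """exam_results: {backbone: [(p_pos, p_neg, t_e), ...]} with the
    backbone's scores for the positive/negative neighbor and the positive
    edge's emergence time t_e (hypothetical result format)."""
    def score(pairs):
        return sum(math.exp(alpha * (t - t_e))     # recency weight Pi
                   for p_pos, p_neg, t_e in pairs
                   if p_pos > p_neg)                # indicator in Eq. (6)
    return max(exam_results, key=lambda b: score(exam_results[b]))

exam_results = {
    "GCN":       [(0.9, 0.2, 4), (0.3, 0.6, 5)],   # 1 correctly ranked pair
    "GraphSAGE": [(0.8, 0.1, 4), (0.7, 0.4, 5)],   # 2 correctly ranked pairs
}
chosen = select_backbone(exam_results, t=5, alpha=0.1)
```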
Algorithm 2: Blockchain-based Decentralized Social Network Learning

Input: Temporal graph data D_t, validator set size n, randomly initialized parameters Θ_u of each node, backbone pool F
Output: Model parameters {Θ*_u | u ∈ V}
for time period t ∈ {0, 1, ..., T−1} do
    # Validation committee sampling
    Calculate Φ_{p,q,t+1} for all (p, q) ∈ G_{t+1} by Eq. 7, requesting the blockchain via the smart contract;
    Calculate V^t_val as the union of all Φ_{p,q,t+1};
    # Local inference
    while ∃ u ∈ V^t_val such that Θ_u has not converged do
        for u ∈ V^t_val do
            Predict the future graph Ĝ_{t+1} given Θ_u and D_t;
            Calculate the prediction loss L(Ĝ_{t+1}, G_{t+1});
            Optimize Θ_u given L(Ĝ_{t+1}, G_{t+1});
        end
    end
    # Aggregation decisions
    for (p, q) ∈ G_{t+1} do
        for u ∈ Φ_{p,q,t+1} do
            u sends Vote(Φ_{p,q,t+1}, p, q, t+1) to the blockchain via the smart contract;
        end
        The smart contract calculates Ver(Φ_{p,q,t+1}, p, q, t+1) by Eq. 1;
    end
end
∀ u ∈ V, Θ*_u ← Θ_u;
return {Θ*_u | u ∈ V};

Validation Committee Sampling: To predict a social connection (p_i, q_i) at time t+1, the blockchain selects n nodes using F_{p_i} to form a validation committee Φ_{p_i,q_i,t+1}, as defined in Eq. 7, via the smart contract, where V_{F_{p_i}} denotes the set of nodes using F_{p_i}. Φ_{p_i,q_i,t+1} is fixed at time t and specified by F_{p_i}.

Φ_{p_i,q_i,t+1} ∼ UniformSample(V_{F_{p_i}}, n)   (7)

Local Inference: Each selected validator φ_j ∈ Φ_{p_i,q_i,t+1} copies D_t to its local memory to run
F_{φ_j} locally and independently. Each φ_j predicts the edges (p_i, q_i) and (p_i, q'_i), q'_i ∈ Neg(φ, p_i, q_i, t), making a binary decision Vote(φ_j, p_i, q_i, t+1) through Eq. 2. Each φ_j then sends its vote to the blockchain via the smart contract.

Aggregation: The smart contract initiates a roll-call procedure that collects the individual validation outcomes from the selected committee members φ_1, φ_2, ..., φ_n and sums the votes Vote(φ_j, p_i, q_i, t+1). As shown in Eq. 1, the smart contract returns a positive decision if more than half of the committee agrees, i.e., \sum_{j=1}^{n} Vote(φ_j, p_i, q_i, t+1) > ⌊n/2⌋.

5 Experiments

5.1 Experimental Setups

We use four real-world temporal graph datasets in the scope of Web 2.0 (UCI [44], Enron [88], and GDELT [88]) and Web 3.0 (Memo-Tx [94]). To evaluate the performance of DeSocial, we use five of the most classic centralized models (MLP [51], GCN [24], GAT [55], GraphSAGE [13], and SGC [64]) in the field of social network learning as baselines for comparison. Detailed information on the datasets and baselines is given in Appendix A.1 and Appendix A.2, respectively.

5.2 Evaluation Metrics

Given the decentralized nature of our problem formulation, we employ evaluation metrics tailored to assess the effectiveness of DeSocial in decentralized environments, inspired by the validation of the double-spending problem on blockchains [32, 25]. That is, validators lack access to the underlying intent of a transaction and can only judge based on structural context. This motivates our use of Acc@K, a set of relative metrics that evaluate whether the predicted positive link ranks higher than its sampled alternatives. In a link prediction task where each test case consists of one positive edge and K−1 negative edges, Acc@K is the probability that the model assigns the highest score to the positive edge among the K candidates. We adopt K ∈ {2, 3, 5} to evaluate the performance of decentralized graph learning.
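The committee sampling of Eq. 7, the majority-vote verdict of Eq. 1, and the Acc@K metric above can all be sketched in a few lines. This is a minimal illustration under stated assumptions (votes are 0/1 integers, test cases are pre-scored pairs of one positive score and K−1 negative scores); it is not the paper's smart-contract code.

```python
import random

def sample_committee(pool, n, seed=None):
    """Eq. 7: uniformly sample an n-validator committee from the nodes
    running the requester's backbone F_{p_i}."""
    return random.Random(seed).sample(pool, n)

def verdict(votes):
    """Eq. 1: positive decision iff strictly more than half the committee votes 1."""
    return sum(votes) > len(votes) // 2

def acc_at_k(test_cases):
    """Acc@K: fraction of cases where the positive edge scores strictly highest
    among its K candidates (one positive score, K-1 negative scores)."""
    hits = sum(1 for pos, negs in test_cases if all(pos > s for s in negs))
    return hits / len(test_cases)
```

For example, `verdict([1, 1, 0, 1, 0])` is positive (3 of 5 validators agree), while a 2-of-5 split is rejected.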
The greater K is, the harder the evaluation task becomes. We follow [88] for the randomized negative sampling method. Appendix A.3 provides our experimental setup and implementation details.

5.3 Impacts of the Personalized Algorithms

Figure 4: Comparison of the performance among different centralized methods, random selection, simple rule-based selection, and DeSocial PA on Acc@2, Acc@3, and Acc@5 for each dataset (panels: UCI, Memo-Tx, Enron, GDELT).

Figure 4 reports the performance of five centralized baselines, random selection, simple rule-based selection, and our personalized algorithm selection method DeSocial PA across multiple datasets and evaluation metrics. In hybrid settings, each validator uses a distinct backbone. Under random selection, every node picks an algorithm from F at random; under simple rule-based selection, every node picks an algorithm from F based on two features of its local structure (degree and clustering). Tables 1 and 2 show the detailed statistics of the centralized baselines and DeSocial PA. Appendix A.5 gives the simple rule for algorithm selection. The results highlight the importance of allowing users to select personalized models in
decentralized settings. Compared to the two simpler hybrid baselines, random selection and rule-based selection, DeSocial PA consistently delivers higher performance across most datasets and evaluation metrics. Specifically, DeSocial PA outperforms all centralized baselines on all three metrics for UCI and Memo-Tx, and shows gains on Enron for Acc@3 and Acc@5 (average gain of 1.18% over the strongest centralized baseline). Even on GDELT and on Acc@2 for Enron, where performance is already saturated, DeSocial PA achieves comparable results without degradation (within 0.25%), demonstrating its robustness. These results realize the motivation behind DeSocial PA: allowing each user to select the most suitable model based on their local context is more effective than relying on random or hand-crafted strategies, and thus enhances the overall prediction.

Table 1: Mean and standard deviation (%) of Acc@2 and Acc@3 in the centralized and decentralized settings. The purple boxes, DeSocial PA, and DeSocial Full represent multi-validator consensus, personalized algorithm selection, and both, respectively. The best centralized scores are underlined; improvements from decentralized methods are in bold. Gain measures the performance gap between the best decentralized and the best centralized methods.
Model           Metric  UCI          Memo-Tx      Enron        GDELT
MLP             Acc@2   66.38±0.34   73.61±0.11   81.48±0.08   91.28±0.02
                Acc@3   52.52±0.30   66.57±0.11   75.20±0.09   87.61±0.02
DeSocial MLP    Acc@2   71.03±0.67   74.83±0.14   83.18±0.10   94.04±0.03
                Acc@3   53.94±0.58   67.11±0.14   75.97±0.10   91.54±0.04
GCN             Acc@2   63.90±0.17   69.62±0.13   79.92±0.09   82.94±0.04
                Acc@3   51.90±0.19   61.21±0.13   74.41±0.09   74.08±0.06
DeSocial GCN    Acc@2   66.89±0.47   75.03±0.18   81.95±0.18   90.57±0.03
                Acc@3   53.54±0.43   69.17±0.15   78.57±0.15   84.96±0.07
GAT             Acc@2   61.15±0.26   72.51±0.26   85.52±0.12   88.29±0.28
                Acc@3   48.24±0.28   65.86±0.18   80.30±0.14   81.34±0.39
DeSocial GAT    Acc@2   63.79±0.46   73.42±0.32   87.51±0.13   94.09±0.20
                Acc@3   48.01±0.52   68.28±0.21   84.29±0.12   89.52±0.21
SAGE            Acc@2   69.00±0.40   82.85±0.15   90.27±0.06   93.16±0.02
                Acc@3   55.78±0.48   75.47±0.16   86.16±0.07   89.59±0.03
DeSocial SAGE   Acc@2   73.31±0.65   85.84±0.21   92.18±0.08   95.68±0.03
                Acc@3   56.22±0.72   76.91±0.21   88.17±0.13   93.14±0.04
SGC             Acc@2   72.77±0.24   80.37±0.05   88.24±0.04   95.59±0.02
                Acc@3   62.77±0.24   74.78±0.07   84.50±0.08   92.46±0.02
DeSocial SGC    Acc@2   76.37±0.44   83.16±0.07   89.62±0.06   98.12±0.02
                Acc@3   65.62±0.39   79.19±0.09   86.13±0.09   96.11±0.03
DeSocial PA     Acc@2   73.35±0.27   83.96±0.13   90.08±0.11   95.57±0.02
                Acc@3   63.16±0.30   77.28±0.12   86.36±0.08   92.44±0.02
DeSocial Full   Acc@2   77.63±0.36   87.25±0.17   92.11±0.13   98.13±0.03
                Acc@3   66.01±0.44   79.65±0.16   88.39±0.13   96.09±0.04
Gain (%)        Acc@2   6.68         5.31         2.12         2.66
                Acc@3   5.16         5.54         2.59         3.95

5.4 Impacts of the Multiple Validators

We first examine Acc@2 and Acc@3 to evaluate the effect of decentralized consensus voting. As shown in Table 1, introducing a 5-validator committee under a single backbone consistently improves performance across most datasets, with average gains of 3.36% in Acc@2 and 3.18% in Acc@3. The only notable drop occurs for GAT on UCI at Acc@3, where the baseline performs below random (i.e., below 0.5), making aggregation more likely to amplify errors.
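The amplification effect noted above follows directly from binomial arithmetic: if each validator is independently correct with probability p, majority voting over n validators helps when p > 0.5 and hurts when p < 0.5. A quick check (illustrative numbers, not taken from the paper):

```python
from math import comb

def majority_accuracy(p, n):
    """Probability that a strict majority of n independent validators,
    each correct with probability p, reaches the right verdict."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Voting sharpens validators that beat random guessing ...
assert majority_accuracy(0.60, 5) > 0.60
# ... but amplifies error for validators below random.
assert majority_accuracy(0.45, 5) < 0.45
```

This is the same reason a committee of 5 yields gains for most backbones yet degrades below-random ones.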
In the more difficult Acc@5 task shown in Table 2, although most backbones still benefit from ensemble voting, four centralized models on UCI fall below 0.5 and further degrade after aggregation, due to error amplification. Despite this, consistent improvements on the other datasets highlight the utility of decentralized voting in reducing variance and correcting individual prediction noise.

Figure 5: Gains versus number of validators (SGC, GraphSAGE, and GCN on UCI, Memo-Tx, Enron, and GDELT). We vary the validator committee size n ∈ {3, 5, 7, 9, 11} and report the corresponding gain in
prediction accuracy. Gains converge as n increases to 9.

We further analyze how the performance gain from multi-node consensus over a single centralized backbone varies with the number of validators, from 3 to 11, as shown in Figure 5. As the committee size increases, the marginal improvement diminishes, and the gain converges around nine validators. DeSocial Full integrates both modules and further improves performance on most tasks. This suggests that personalized algorithm selection can effectively complement decentralized voting in improving prediction accuracy. On the GDELT dataset, however, the benefit is less pronounced due to the high overall accuracy of individual models and the relatively homogeneous nature of the graph.

Table 2: Mean and standard deviation (%) of Acc@5 in the centralized and decentralized settings.

Model           Metric  UCI          Memo-Tx      Enron        GDELT
MLP             Acc@5   38.10±0.27   59.57±0.09   68.53±0.09   82.11±0.03
DeSocial MLP    Acc@5   37.03±0.46   60.37±0.10   69.96±0.10   86.81±0.05
GCN             Acc@5   39.51±0.20   48.91±0.15   67.32±0.10   60.29±0.10
DeSocial GCN    Acc@5   39.47±0.50   54.73±0.21   74.10±0.11   68.25±0.15
GAT             Acc@5   36.13±0.28   58.27±0.13   74.03±0.22   70.81±0.54
DeSocial GAT    Acc@5   34.34±0.43   62.39±0.15   80.99±0.20   79.69±0.36
SAGE            Acc@5   42.92±0.47   67.34±0.16   81.36±0.09   84.45±0.03
DeSocial SAGE   Acc@5   41.36±0.59   67.58±0.21   83.38±0.15   88.76±0.05
SGC             Acc@5   52.06±0.24   67.65±0.07   80.18±0.12   87.60±0.03
DeSocial SGC    Acc@5   53.91±0.40   72.92±0.12   82.14±0.10   92.15±0.04
DeSocial PA     Acc@5   52.29±0.33   69.66±0.14   81.93±0.12   87.58±0.04
DeSocial Full   Acc@5   54.14±0.49   72.72±0.14   84.28±0.15   92.12±0.05
Gain (%)        Acc@5   3.80         7.79         3.59         5.19

5.5 Efficiency Analysis

Table 3: Efficiency analysis of the runtime (s) of the whole process for one test period (100 epochs) on UCI and Memo-Tx, comparing decentralized methods with centralized ones. Rounded to 4 significant figures.
                  Centralized                               Decentralized (Amortized)
Dataset   MLP     GCN     GAT      SAGE    SGC       DeSocial PA   DeSocial Vote   DeSocial Full
UCI       44.08   54.48   140.5    51.79   46.43     54.89         47.19           55.28
Memo-Tx   4257    2402    20750    2011    439.8     2016          2021            2021

To evaluate the runtime efficiency of DeSocial, we compare the five centralized models with three decentralized variants: personalized algorithm selection alone (DeSocial PA), multiple-validator voting alone (DeSocial Vote, referring to the DeSocial F with the highest metrics, i.e., DeSocial SGC for UCI and DeSocial SAGE for Memo-Tx), and the combined configuration (DeSocial Full). All decentralized variants account for the additional computational overhead introduced by the ETH Ganache infrastructure, which serves as a local blockchain emulator to simulate on-chain operations such as user requests, validator selection, validator voting, and decision aggregation. All graph learning processes are executed on an off-chain local server to simulate DeSocial in a single-machine environment, with each graph model stored independently.

We use Ganache to capture blockchain-side latency and execution cost in decentralized social prediction processes, avoiding the complexity of deploying to a full public ETH testnet. For decentralized methods, runtime is measured from a single user's perspective, starting from a social link request and ending when the decision is aggregated. Appendix B details how the runtime of each decentralized method is calculated. As shown in Table 3, the amortized runtime of DeSocial is not significantly impacted by the integration of the ETH local development chain. Our experiments are conducted in a single-machine simulation where both graph training and blockchain interactions are executed serially. Appendix A.3 shows the
details of the computation device. In a real ETH network deployment with multiple participating users, the system would better exploit parallelism and further improve efficiency.

6 Conclusions

We extended social network prediction to a decentralized setting and implemented our DeSocial framework with Web 3.0 blockchain technology. DeSocial leverages the decentralized nature of blockchain to enable user-driven algorithm selection and validator-level majority voting for social prediction. By allowing each user to select the most suitable backbone model based on their local subgraph, and aggregating predictions from multiple independently hosted validators, DeSocial overcomes the limitations of centralized frameworks and consistently improves link prediction performance. Our results highlight the value of personalized model choice and decentralized consensus in building personalized and robust social network algorithms on blockchain infrastructures.

Limitations: Although our framework demonstrates clear improvements in social link prediction, it also highlights several open challenges that present opportunities for future research. First, blockchain testnets are easy to deploy but inefficient, while real chains like ETH offer higher throughput yet require multi-machine coordination, making them difficult to simulate on a single machine. Second, future work can apply stronger graph learning backbones and more sophisticated validation methods to further improve social network predictions.

References

[1] Binance. Proof of authority explained, 2018.
[2] Binance US. Binance US, 2025.
[3] Leo Breiman. Bagging predictors. Machine Learning, 24:123–140, 1996.
[4] Runjin Chen, Tong Zhao, Ajay Kumar Jaiswal, Neil Shah, and Zhangyang Wang. LLaGA: Large language and graph assistant. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21–27, 2024. OpenReview.net, 2024.
[5] Yen-Liang Chen, Chen-Hsin Hsiao, and Chia-Chi Wu. An ensemble model for link prediction based on graph embedding. Decision Support Systems, 157:113753, 2022.
[6] Sihao Ding, Fuli Feng, Xiangnan He, Yong Liao, Jun Shi, and Yongdong Zhang. Causal incremental graph convolution for recommender system retraining. IEEE Transactions on Neural Networks and Learning Systems, 2022.
[7] Xibin Dong, Zhiwen Yu, Wenming Cao, Yifan Shi, and Qianli Ma. A survey on ensemble learning. Frontiers of Computer Science, 14:241–258, 2020.
[8] Rui Duan, Chungang Yan, Junli Wang, and Changjun Jiang. Graph ensemble neural network. Information Fusion, 110:102461, 2024.
[9] Shahriar Fahim, S Katibur Rahman, and Sharfuddin Mahmood. Blockchain: A comparative study of consensus algorithms PoW, PoS, PoA, PoV. Int. J. Math. Sci. Comput, 3(1):46–57, 2023.
[10] Michael J Franklin, Donald Kossmann, Tim Kraska, Sukriti Ramesh, and Reynold Xin. CrowdDB: Answering queries with crowdsourcing. In Proceedings of the 2011 ACM SIGMOD International Conference on Management of Data, pages 61–72, 2011.
[11] Yoav Freund and Robert E Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997.
[12] Yingqiang Ge, Shuchang Liu, Zuohui Fu, Juntao Tan, Zelong Li, Shuyuan Xu, Yunqi Li, Yikun Xian, and Yongfeng Zhang. A survey on trustworthy recommender systems. ACM Transactions on Recommender Systems, 3(2):1–68, 2024.
[13] William L. Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Proceedings of the 31st International Conference on Neural Information Processing Systems,
NIPS'17, pages 1025–1035, 2017.
[14] Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, and Meng Wang. LightGCN: Simplifying and powering graph convolution network for recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 639–648, 2020.
[15] Xiaoxin He, Xavier Bresson, Thomas Laurent, Adam Perold, Yann LeCun, and Bryan Hooi. Harnessing explanations: LLM-to-LM interpreter for enhanced text-attributed graph representation learning. arXiv preprint arXiv:2305.19523, 2023.
[16] Liwei Huang, Yutao Ma, Yanbo Liu, Bohong Danny Du, Shuliang Wang, and Deyi Li. Position-enhanced and time-aware graph convolutional network for sequential recommendations. ACM Transactions on Information Systems, 41(1):1–32, 2023.
[17] Dietmar Jannach, Sidra Naveed, and Michael Jugovac. User control in recommender systems: Overview and interaction challenges. In Derek Bridge and Heiner Stuckenschmidt, editors, E-Commerce and Web Technologies, pages 21–33, Cham, 2017. Springer International Publishing.
[18] Seungwon Jeong. Centralized decentralization: Simple economics of the DPoS blockchain governance. Applied Economics Letters, pages 1–6, 2024.
[19] Ujun Jeong, Lynnette Hui Xian Ng, Kathleen M Carley, and Huan Liu. Navigating decentralized online social networks: An overview of technical and societal challenges in architectural choices. arXiv preprint arXiv:2504.00071, 2025.
[20] Bowen Jin, Gang Liu, Chi Han, Meng Jiang, Heng Ji, and Jiawei Han. Large language models on graphs: A comprehensive survey. IEEE Transactions on Knowledge and Data Engineering, 36(12):8622–8642, 2024.
[21] Mingyu Jin, Weidi Luo, Sitao Cheng, Xinyi Wang, Wenyue Hua, Ruixiang Tang, William Yang Wang, and Yongfeng Zhang. Disentangling memory and reasoning ability in large language models. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics, 2025.
[22] Mingyu Jin, Kai Mei, Wujiang Xu, Mingjie Sun, Ruixiang Tang, Mengnan Du, Zirui Liu, and Yongfeng Zhang. Massive values in self-attention modules are the key to contextual knowledge understanding. In Forty-second International Conference on Machine Learning, 2025.
[23] Mingyu Jin, Qinkai Yu, Jingyuan Huang, Qingcheng Zeng, Zhenting Wang, Wenyue Hua, Haiyan Zhao, Kai Mei, Yanda Meng, Kaize Ding, Fan Yang, Mengnan Du, and Yongfeng Zhang. Exploring concept depth: How large language models acquire knowledge and concept at different layers? In Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, and Steven Schockaert, editors, Proceedings of the 31st International Conference on Computational Linguistics, pages 558–573, Abu Dhabi, UAE, January 2025. Association for Computational Linguistics.
[24] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
[25] Abhishek Kumar, Bashant Kumar Sah, Tushar Mehrotra, and Gaurav Kumar Rajput. A review on double spending problem in blockchain. In 2023 International Conference on Computational Intelligence and Sustainable Engineering Solutions (CISES), pages 881–889. IEEE, 2023.
[26] Srijan Kumar, Xikun Zhang, and Jure Leskovec. Predicting dynamic embedding trajectory in temporal interaction networks. In KDD '19, 2019.
[27] Aptos Labs. The Aptos blockchain: Safe, scalable, and upgradeable Web3 infrastructure. https://aptosfoundation.org/whitepaper/aptos-whitepaper_en.pdf, 2022.
[28] Chaoliu Li, Lianghao Xia, Xubin Ren, Yaowen Ye, Yong Xu, and Chao Huang. Graph transformer for recommendation. In Proceedings of the 46th International ACM SIGIR Conference on
Research and Development in Information Retrieval, SIGIR '23, pages 1680–1689, New York, NY, USA, 2023. Association for Computing Machinery.
[29] Zihao Li, Jianfeng Li, Zheyuan He, Xiapu Luo, Ting Wang, Xiaoze Ni, Wenwu Yang, Xi Chen, and Ting Chen. Demystifying DeFi MEV activities in Flashbots bundle. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, pages 165–179, 2023.
[30] Jing Liu and Zhentian Liu. A survey on security verification of blockchain smart contracts. IEEE Access, 7:77894–77904, 2019.
[31] Kai Liu, Minghao Yu, Yang Jin, Yue Wang, Jiaqi Yan, and Xiao Fan Liu. Tokenomic model of friend.tech social platform: A data-driven analysis. In 2023 IEEE International Conference on Data Mining Workshops (ICDMW), pages 656–662. IEEE, 2023.
[32] Weikang Liu, Bin Cao, and Mugen Peng. Two-tier multi-zone consensus: Enable intelligence sharing for AIoT with enhanced security. In 2024 IEEE Annual Congress on Artificial Intelligence of Things (AIoT), pages 232–238. IEEE, 2024.
[33] Xinyi Liu, Ruijie Wang, Dachun Sun, Dilek Hakkani Tur, and Tarek Abdelzaher. Uncovering cross-domain recommendation ability of large language models. In Companion Proceedings of the ACM on Web Conference 2025, WWW '25, pages 2736–2743, New York, NY, USA, 2025. Association for Computing Machinery.
[34] Zheyuan Liu, Xiaoxin He, Yijun Tian, and Nitesh V. Chawla. Can we soft prompt LLMs for graph learning tasks? In Tat-Seng Chua, Chong-Wah Ngo, Roy Ka-Wei Lee, Ravi Kumar, and Hady W. Lauw, editors, Companion Proceedings of the ACM on Web Conference 2024, WWW 2024, Singapore, Singapore, May 13–17, 2024, pages 481–484. ACM, 2024.
[35] David Mazieres. The Stellar consensus protocol: A federated model for internet-level consensus. Stellar Development Foundation, 2015.
[36] Kai Mei, Xi Zhu, Wujiang Xu, Wenyue Hua, Mingyu Jin, Zelong Li, Shuyuan Xu, Ruosong Ye, Yingqiang Ge, and Yongfeng Zhang. AIOS: LLM agent operating system.
arXiv preprint arXiv:2403.16971, 2024.
[37] Chuizheng Meng, Sirisha Rambhatla, and Yan Liu. Cross-node federated graph neural network for spatio-temporal data modeling. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 1202–1211, 2021.
[38] Ibomoiye Domor Mienye and Yanxia Sun. A survey of ensemble learning: Concepts, algorithms, applications, and prospects. IEEE Access, 10:99129–99149, 2022.
[39] Du Mingxiao, Ma Xiaofeng, Zhang Zhe, Wang Xiangwei, and Chen Qijun. A review on consensus algorithm of blockchain. In 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pages 2567–2572. IEEE, 2017.
[40] Fatma Mlika, Wafa Karoui, and Lotfi Ben Romdhane. Blockchain solutions for trustworthy decentralization in social networks. Computer Networks, page 110336, 2024.
[41] MyShell AI. MyShell documentation, 2025.
[42] Satoshi Nakamoto. Bitcoin: A peer-to-peer electronic cash system. 2008.
[43] Aldo Pareja, Giacomo Domeniconi, Jie Chen, Tengfei Ma, Toyotaro Suzumura, Hiroki Kanezashi, Tim Kaler, Tao Schardl, and Charles Leiserson. EvolveGCN: Evolving graph convolutional networks for dynamic graphs. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 5363–5370, 2020.
[44] Farimah Poursafaei, Shenyang Huang, Kellin Pelrine, and Reihaneh Rabbany. Towards better evaluation for dynamic link prediction. Advances in Neural Information Processing
Systems, 35:32928–32941, 2022.
[45] Tao Qi, Fangzhao Wu, Chuhan Wu, Lingjuan Lyu, Tong Xu, Hao Liao, Zhongliang Yang, Yongfeng Huang, and Xing Xie. FairVFL: A fair vertical federated learning framework with contrastive adversarial learning. Advances in Neural Information Processing Systems, 35:7852–7865, 2022.
[46] Zhen Qin, Xueqiang Yan, Mengchu Zhou, and Shuiguang Deng. BlockDFL: A blockchain-based fully decentralized peer-to-peer federated learning framework. In Proceedings of the ACM on Web Conference 2024, pages 2914–2925, 2024.
[47] Longfei Qiu, Yoonseung Kim, Ji-Yong Shin, Jieung Kim, Wolf Honoré, and Zhong Shao. LiDO: Linearizable Byzantine distributed objects with refinement-based liveness proofs. Proceedings of the ACM on Programming Languages, 8(PLDI):1140–1164, 2024.
[48] Ladislav Rampášek, Michael Galkin, Vijay Prakash Dwivedi, Anh Tuan Luu, Guy Wolf, and Dominique Beaini. Recipe for a general, powerful, scalable graph transformer. Advances in Neural Information Processing Systems, 35:14501–14515, 2022.
[49] Xubin Ren, Jiabin Tang, Dawei Yin, Nitesh Chawla, and Chao Huang. A survey of large language models for graphs. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD '24, pages 6616–6626, New York, NY, USA, 2024. Association for Computing Machinery.
[50] Yuji Roh, Qingyun Liu, Huan Gui, Zhe Yuan, Yujin Tang, Steven Euijong Whang, Liang Liu, Shuchao Bi, Lichan Hong, Ed H Chi, et al. LEVI: Generalizable fine-tuning via layer-wise ensemble of different views. arXiv preprint arXiv:2402.04644, 2024.
[51] Frank Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6):386, 1958.
[52] Aravind Sankar, Yanhong Wu, Liang Gou, Wei Zhang, and Hao Yang. DySAT: Deep neural representation learning on dynamic graphs via self-attention networks.
In Proceedings of the 13th International Conference on Web Search and Data Mining, pages 519–527, 2020.
[53] Jiabin Tang, Yuhao Yang, Wei Wei, Lei Shi, Lixin Su, Suqi Cheng, Dawei Yin, and Chao Huang. GraphGPT: Graph instruction tuning for large language models. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '24, pages 491–500, New York, NY, USA, 2024. Association for Computing Machinery.
[54] The MystenLabs Team. The Sui smart contracts platform. https://github.com/MystenLabs/sui/blob/main/doc/paper/sui.pdf, 2023.
[55] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.
[56] Shicheng Wan, Hong Lin, Wensheng Gan, Jiahui Chen, and Philip S Yu. Web3: The next internet revolution. IEEE Internet of Things Journal, 11(21):34811–34825, 2024.
[57] Ruijie Wang, Jingyuan Huang, Yutong Zhang, Jinyang Li, Yufeng Wang, Wanyu Zhao, Shengzhong Liu, Charith Mendis, and Tarek Abdelzaher. TGOnline: Enhancing temporal graph learning with adaptive online meta-learning. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1659–1669, 2024.
[58] Ruijie Wang, Zheng Li, Danqing Zhang, Qingyu Yin, Tong Zhao, Bing Yin, and Tarek Abdelzaher. RETE: Retrieval-enhanced temporal event forecasting on unified query product evolutionary graph. In Proceedings of the ACM Web Conference 2022, pages 462–472, 2022.
[59] Yanbang Wang, Yen-Yu Chang, Yunyu Liu, Jure
Leskovec, and Pan Li. Inductive representation learning in temporal networks via causal anonymous walks. arXiv preprint arXiv:2101.05974, 2021.
[60] Wei Wei, Xubin Ren, Jiabin Tang, Qinyong Wang, Lixin Su, Suqi Cheng, Junfeng Wang, Dawei Yin, and Chao Huang. LLMRec: Large language models with graph augmentation for recommendation. In Proceedings of the 17th ACM International Conference on Web Search and Data Mining, WSDM '24, pages 806–815, New York, NY, USA, 2024. Association for Computing Machinery.
[61] Liangliang Wen, Jiye Liang, Kaixuan Yao, and Zhiqiang Wang. Black-box adversarial attack on graph neural networks with node voting mechanism. IEEE Transactions on Knowledge and Data Engineering, 2024.
[62] Brian Wohlhieter. BitClout: Decentralized social media or NFTs for celebrities?, 2021.
[63] Gavin Wood et al. Ethereum: A secure decentralised generalised transaction ledger. Ethereum Project Yellow Paper, 151(2014):1–32, 2014.
[64] Felix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Weinberger. Simplifying graph convolutional networks. In International Conference on Machine Learning, pages 6861–6871. PMLR, 2019.
[65] Qitian Wu, Wentao Zhao, Zenan Li, David P. Wipf, and Junchi Yan. NodeFormer: A scalable graph structure learning transformer for node classification. In Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 – December 9, 2022, 2022.
[66] Qitian Wu, Wentao Zhao, Chenxiao Yang, Hengrui Zhang, Fan Nie, Haitian Jiang, Yatao Bian, and Junchi Yan. Simplifying and empowering transformers for large-graph representations.
In Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine, editors, Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10–16, 2023, 2023.
[67] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S Yu. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1):4–24, 2020.
[68] Lianghao Xia, Chao Huang, Chunzhen Huang, Kangyi Lin, Tao Yu, and Ben Kao. Automated self-supervised learning for recommendation. In Proceedings of the ACM Web Conference 2023, pages 992–1002, 2023.
[69] Da Xu, Chuanwei Ruan, Evren Korpeoglu, Sushant Kumar, and Kannan Achan. Inductive representation learning on temporal graphs. arXiv preprint arXiv:2002.07962, 2020.
[70] Dongliang Xu, Wei Shi, Wensheng Zhai, and Zhihong Tian. Multi-candidate voting model based on blockchain. IEEE/CAA Journal of Automatica Sinica, 8(12):1891–1900, 2021.
[71] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018.
[72] Wujiang Xu, Shaoshuai Li, Mingming Ha, Xiaobo Guo, Qiongxu Ma, Xiaolei Liu, Linxun Chen, and Zhenfeng Zhu. Neural node matching for multi-target cross domain recommendation. In 2023 IEEE 39th International Conference on Data Engineering (ICDE), pages 2154–2166. IEEE, 2023.
[73] Wujiang Xu, Zujie Liang, Jiaojiao Han, Xuying Ning, Wenfang Lin, Linxun Chen, Feng Wei, and Yongfeng Zhang. SLMRec: Empowering small language models for sequential recommendation. arXiv e-prints, pages arXiv–2405, 2024.
[74] Wujiang Xu, Kai
Mei, Hang Gao, Juntao Tan, Zujie Liang, and Yongfeng Zhang. A-MEM: Agentic memory for LLM agents. arXiv preprint arXiv:2502.12110, 2025.
[75] Wujiang Xu, Qitian Wu, Runzhong Wang, Mingming Ha, Qiongxu Ma, Linxun Chen, Bing Han, and Junchi Yan. Rethinking cross-domain sequential recommendation under open-world assumptions. In Proceedings of the ACM Web Conference 2024, pages 3173–3184, 2024.
[76] Anatoly Yakovenko. Solana: A new architecture for a high performance blockchain v0.8.13. Whitepaper, 2018.
[77] Cheng Yang, Chunchen Wang, Yuanfu Lu, Xumeng Gong, Chuan Shi, Wei Wang, and Xu Zhang. Few-shot link prediction in dynamic networks. In WSDM '22, 2022.
[78] Liangwei Yang, Shengjie Wang, Yunzhe Tao, Jiankai Sun, Xiaolong Liu, Philip S Yu, and Taiqing Wang. DGRec: Graph neural network for recommendation with diversified embedding generation. In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, pages 661–669, 2023.
[79] Ruimeng Ye, Yang Xiao, and Bo Hui. A pilot study of weak-to-strong generalization in safety, toxicity, and legal reasoning. In ICLR 2025 Workshop on Bidirectional Human-AI Alignment, 2025.
[80] Ruosong Ye, Caiqi Zhang, Runhui Wang, Shuyuan Xu, and Yongfeng Zhang. Language is all a graph needs. arXiv preprint arXiv:2308.07134, 2023.
[81] Maofan Yin, Dahlia Malkhi, Michael K Reiter, Guy Golan Gueta, and Ittai Abraham. HotStuff: BFT consensus with linearity and responsiveness. In Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing, pages 347–356, 2019.
[82] Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, and Tie-Yan Liu. Do transformers really perform badly for graph representation? Advances in Neural Information Processing Systems, 34:28877–28888, 2021.
[83] Jiaxuan You, Tianyu Du, and Jure Leskovec. ROLAND: Graph learning framework for dynamic graphs.
In Proceedings of the 28th ACM SIGKDD conference on knowledge discovery and data mining , pages 2358–2366, 2022. 14 [84] Le Yu, Leilei Sun, Bowen Du, and Weifeng Lv. Towards better dynamic graph learning: New architecture and unified library. Advances in Neural Information Processing Systems , 36:67686–67700, 2023. [85] Shuo Yu, Yingbo Wang, Ruolin Li, Guchun Liu, Yanming Shen, Shaoxiong Ji, Bowen Li, Fengling Han, Xiuzhen Zhang, and Feng Xia. Graph2text or graph2token: A perspective of large language models for graph learning. arXiv preprint arXiv:2501.01124 , 2025. [86] Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B Tenenbaum, Tianmin Shu, and Chuang Gan. Building cooperative embodied agents modularly with large language models. arXiv preprint arXiv:2307.02485 , 2023. [87] Huanding Zhang, Tao Shen, Fei Wu, Mingyang Yin, Hongxia Yang, and Chao Wu. Federated graph learning–a position paper. arXiv preprint arXiv:2105.11099 , 2021. [88] Jiasheng Zhang, Jialin Chen, Menglin Yang, Aosong Feng, Shuang Liang, Jie Shao, and Rex Ying. Dtgb: A comprehensive benchmark for dynamic text-attributed graphs. In A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, and C. Zhang, editors, Advances in Neural Information Processing Systems , volume 37, pages 91405–91429. Curran Associates, Inc., 2024. [89] Wei Zhang, Hongcheng Guo, Jian Yang, Yi Zhang, Chaoran Yan, Zhoujin Tian, Hangyuan Ji, Zhoujun Li, Tongliang Li, Tieqiao Zheng, et al. mabc: multi-agent
blockchain-inspired collabo- ration for root cause analysis in micro-services architecture. arXiv preprint arXiv:2404.12135 , 2024. [90] Xin Zhang, Daochen Zha, and Qiaoyu Tan. E2gnn: Efficient graph neural network ensembles for semi-supervised classification. arXiv preprint arXiv:2405.03401 , 2024. [91] Yang Zhang, Fuli Feng, Chenxu Wang, Xiangnan He, Meng Wang, Yan Li, and Yongdong Zhang. How to retrain recommender system? a sequential meta-learning method. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval , SIGIR ’20, page 1479–1488, 2020. [92] Xi Zhu, Haochen Xue, Ziwei Zhao, Wujiang Xu, Jingyuan Huang, Minghao Guo, Qifan Wang, Kaixiong Zhou, and Yongfeng Zhang. Llm as gnn: Graph vocabulary learning for text-attributed graph foundation models. arXiv preprint arXiv:2503.03313 , 2025. [93] Weiqin Zou, David Lo, Pavneet Singh Kochhar, Xuan-Bach Dinh Le, Xin Xia, Yang Feng, Zhenyu Chen, and Baowen Xu. Smart contract development: Challenges and opportunities. IEEE transactions on software engineering , 47(10):2084–2106, 2019. [94] Wenrui Zuo, Aravindh Raman, Raul J Mondragón, and Gareth Tyson. Set in stone: Analysis of an immutable web3 social media platform. In Proceedings of the ACM Web Conference 2023 , pages 1865–1874, 2023. 15 Contents 1 Introduction 1 2 Related Works 2 2.1 Graph Learning for Social Networks . . . . . . . . . . . . . . . . . . . . . . . . . 2 2.2 Blockchain Consensus Mechanism . . . . . . . . . . . . . . . . . . . . . . . . . . 3 2.3 Ensemble Learning and Majority V oting . . . . . . . . . . . . . . . . . . . . . . . 3 3 Problem Definition 3 4 Our Framework 4 4.1 Framework Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 4.2 Personalized Backbone Algorithm Selection . . . . . . . . . . . . . . . . . . . . . 5 4.3 Decentralized Consensus V oting Scheme . . . . . . . . . . . . . . . . . . . . . . . 6 5 Experiments 7 5.1 Experimental Setups . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . 7 5.2 Evaluation Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 5.3 Impacts of the Personalized Algorithms . . . . . . . . . . . . . . . . . . . . . . . 7 5.4 Impacts of the Multiple Validators . . . . .
. . . . . . . . . . . . . . . . . . . . . 8 5.5 Efficiency Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 6 Conclusions 9 A Details for the Experiments 18 A.1 Datasets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 A.2 Baselines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 A.3 Implementation Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 A.4 Framework Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 A.5 Simple Rule-Based Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 A.6 Ganache Usage Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 A.7 Table of Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 B Operation Time Analysis of DeSocial 21 B.1 DeSocial Operation Time Breakdown . . . . . . . . . . . . . . . . . . . . . . . . 21 B.2 Run Time Analysis of Decentralized Methods . . . . . . . . . . . . . . . . . . . . 22 B.2.1 Both Modules Enabled . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 B.2.2 Personalized Algorithm Module Enabled Only . . . . . . . . . . . . . . . 23 B.2.3 Decentralized Consensus Module Enabled Only . . . . . . . . . . . . . . . 23 C Additional Studies 24 C.1 Sensitivity to Hyperparameters in Personalized Algorithm Module . . . . . . . . . 24 16 C.2 Enhancements of Decentralized Consensus . . . . . . .
. . . . . . . . . . . . . . 27
C.3 Consensus Analysis of Different Node Groups . . . . . . . . . . . . . . . . . . . . 28
D Broader Impacts 28
E Ethics Statement 29

A Details for the Experiments

A.1 Datasets

Table 4 shows the number of nodes and edges, the network density, and the network type of each dataset. For each dataset, we divide the temporal graph into a sequence of discrete time slices by uniformly partitioning all edges according to their timestamps, ensuring that each time slice contains approximately the same number of interactions while preserving the overall temporal order of events. We divide the 40 temporal slices into training, validation, and testing periods following a ratio of 25:5:10.

Table 4: Dataset Statistics (Dataset: #Users, #Interactions, Density, Network)
UCI: 1,899, 59,835, 0.016592, Web 2.0
Enron: 42,711, 797,907, 0.000437, Web 2.0
GDELT: 6,786, 1,339,245, 0.029083, Web 2.0
Memo-Tx: 10,907, 994,131, 0.008357, Web 3.0

Memo-Tx [94] is a decentralized transaction network with timestamps from memo.cash, a Web 3.0 social networking platform built directly on the Bitcoin Cash blockchain. Every user action on the platform, such as posting content, replying to others, liking posts, following users, trading cryptocurrencies, or updating profile information, is implemented as an on-chain transaction. The transaction network of memo.cash is constructed from timestamped transaction data spanning April 6, 2018 to November 30, 2021. Transactions are organized into blocks, each containing multiple inputs and outputs. For transactions involving the same cryptocurrency within a block, edges are established from each input node to each output node, indicating currency flows between nodes. UCI [44] is a spatio-temporal network of student interactions within the University of California, Irvine, capturing timestamped communication relationships between students.
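The uniform temporal partitioning described above can be sketched as follows. This is a minimal illustration under assumed data structures (a list of (u, v, t) edge tuples); the function name is not the paper's actual loader.

```python
def make_time_slices(edges, n_slices=40):
    """Partition timestamped edges into n_slices slices of near-equal size,
    preserving the global temporal order. `edges` is a list of (u, v, t)
    tuples; the tuple format is an illustrative assumption."""
    edges = sorted(edges, key=lambda e: e[2])      # order by timestamp
    base, extra = divmod(len(edges), n_slices)     # near-equal slice sizes
    slices, start = [], 0
    for i in range(n_slices):
        end = start + base + (1 if i < extra else 0)
        slices.append(edges[start:end])
        start = end
    return slices

# 25:5:10 split of the 40 slices into training / validation / testing periods
slices = make_time_slices([(i % 7, (i + 1) % 7, i) for i in range(400)])
train, val, test = slices[:25], slices[25:30], slices[30:40]
```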
Enron [88] is a temporal graph dataset constructed from email communications between employees of the Enron corporation from 1999 to 2002. Each edge corresponds to a timestamped email between Enron employees, and edges are arranged in chronological order by sending time. GDELT [88] is a temporal graph dataset that originates from Web 2.0 media sources and captures global political and social interactions over time. Each node represents an entity (e.g., a person, country, or organization). Edges, arranged in chronological order, indicate interactions between entities at specific timestamps, reflecting when these entities were mentioned together or engaged in an event.

A.2 Baselines

MLP [51]: The multilayer perceptron uses only node attributes, ignoring the topology of the graph. It is a strong feature-based baseline that is computationally efficient because it does not rely on graph structure. GCN [24]: The graph convolutional network performs spectral graph convolution by aggregating the features of immediate neighbors. It efficiently captures local homophilic patterns. GAT [55]: The graph attention network introduces an attention mechanism to assign adaptive weights to neighbors during aggregation. This is particularly effective on heterogeneous or noisily connected graphs, but incurs high computational cost and long running time because of the attention computation. GraphSAGE [13]: GraphSAGE is a
framework that learns aggregation functions over sampled neighbors, supporting generalization to unseen nodes. It suits large and dynamic graphs, balancing performance and scalability, and computes faster by reducing the number of neighbors used for aggregation. SGC [64]: The simplified graph convolution network removes nonlinearities and collapses multiple graph convolution layers into a single linear transformation with pre-computed propagation. It improves speed significantly while maintaining fairly high performance on homogeneous graphs.

A.3 Implementation Details

Hyper-Parameters in Backbone Selection. We run the heuristic backbone selection with the following hyperparameters:
•Time decay coefficient α ∈ {0, −0.01, −0.1, −1} explores increasing levels of decay. α = 0 means no time decay is applied and all past neighbor interactions are treated equally. α < 0 applies exponential or linear time discounting, where older interactions become less influential; the more negative α is, the faster the decay.
•Number of sampled neighborhood pairs γ ∈ {250, 500, 750, 1000, 1250} determines the number of positive–negative neighbor edge pairs sampled for each node u during the heuristic backbone evaluation. A larger γ lets the backbone model be tested against multiple negative samples from the same neighbor, making the selection more reliable.
•Backbone pool set F ∈ 2^{FFull} is a chosen subset of the full backbone pool FFull (i.e., MLP, GCN, GAT, GraphSAGE, and SGC). In real-life software use, users tend not to pay attention to all the backbones, and removing weaker or redundant models might improve selection precision. We can therefore study the variance in performance induced by different F.
Hyper-Parameters in Consensus Mechanism. To study the effectiveness of decentralized modeling, we compare MLP, GCN, GAT, GraphSAGE, and SGC with the number of validators ranging over {3, 5, 7}.
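As a concrete reading of the time decay coefficient above, the weight of a historical interaction can be computed as below. The exponential form is an illustrative assumption consistent with α = 0 meaning no decay; it is not necessarily the paper's exact temporal weight Π.

```python
import math

def temporal_weight(t_now, t_edge, alpha):
    """Weight of a past edge when evaluating backbones for a node.
    alpha = 0 treats all interactions equally; alpha < 0 down-weights
    older interactions, and a more negative alpha decays faster.
    The exponential form is an assumption for illustration."""
    return math.exp(alpha * (t_now - t_edge))

# older edges lose influence for alpha < 0
weights = [temporal_weight(40, t, -0.1) for t in (39, 30, 10)]
```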
Each expert is trained independently on the same training data but with different random seeds to encourage diversity; the differing seeds also lead validators to pick different negative samples. In addition, we perform hyperparameter tuning over the following ranges: learning rate in {1e−1, 5e−2, 1e−2, 5e−3, 1e−3, 5e−4, 1e−4, 5e−5, 1e−5, 5e−6, 1e−6} and dropout in {0.3, 0.5, 0.7}. For each model on each dataset, we report the test-period performance on all metrics using the hyperparameter setting that achieves the highest value of each metric on the validation period. Baseline Implementation. For GCN, GAT, and GraphSAGE, we implemented the models with two GCNConv, GATv2Conv, and SAGEConv layers from PyG, respectively, and used dot products for decoding. In GAT, we used 4 attention heads instead of 8 to save memory and computation time, and added BatchNorm. For SGC, we used SGConv for encoding and a multilayer perceptron for decoding. The graph training algorithms are implemented on top of the open-source DTGB benchmark [88]. Execution Environment. Our experiments are conducted on a server equipped with an Intel Xeon Gold 6226R CPU and eight NVIDIA A100 GPUs. We employ a local ETH development environment powered by Ganache v7.9.2, with @ganache/cli and @ganache/core in version 0.10.2, to simulate blockchain behavior and smart contract interactions. The smart contract is implemented in Solidity and compiled with the Truffle suite. Code Availability. We have included the implementation code in our supplementary zip file. Although we originally planned to provide an anonymous GitHub repository, we decided that the supplementary materials already suffice to ensure reproducibility.
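The tuning protocol above (a grid over learning rate and dropout, selecting per metric the best validation setting) can be sketched as follows; `evaluate` is a hypothetical stand-in for training one configuration and scoring it on the validation period.

```python
from itertools import product

# grids as listed above
LRS = [1e-1, 5e-2, 1e-2, 5e-3, 1e-3, 5e-4, 1e-4, 5e-5, 1e-5, 5e-6, 1e-6]
DROPOUTS = [0.3, 0.5, 0.7]

def select_best(evaluate, metrics=("Acc@2", "Acc@3", "Acc@5")):
    """Return, for each metric, the (lr, dropout) pair with the highest
    validation score. `evaluate(lr, p)` must return a metric -> score dict;
    it is a hypothetical helper, not the paper's code."""
    best = {m: (None, float("-inf")) for m in metrics}
    for lr, p in product(LRS, DROPOUTS):
        scores = evaluate(lr, p)
        for m in metrics:
            if scores[m] > best[m][1]:
                best[m] = ((lr, p), scores[m])
    return {m: cfg for m, (cfg, _) in best.items()}
```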
A.4 Framework Training

Training Strategies. At each time step t, the model is fully retrained, taking ∪_{τ=0}^{t−1} Gτ as the training dataset. The model is validated on Gt and tested on Gt+1, simulating an inductive setting where future edges must be predicted without directly observing them during training. We train for at most 100 epochs and apply early stopping with a patience of 20 epochs.

Model Updates. We restrict model updates to only the nodes selected as validators to test Gt+1. Originally, DeSocial is designed to update the representations of all nodes using Gt; however, this is computationally intensive for a single-server simulation, and in practice only a small subset of nodes is involved in validation, so we restrict updates to improve efficiency. In a practical blockchain setting, we envision that DeSocial would be executed in a decentralized manner, where multiple machines perform prediction tasks independently and concurrently, thereby substantially reducing computational latency.

(PyG: https://pytorch-geometric.readthedocs.io/en/latest/index.html)

A.5 Simple Rule-Based Selection

To demonstrate the effectiveness of DeSocial's personalized algorithm selection, we conducted an additional ablation study in which each user selects a personalized backbone algorithm based solely on two trivial local subgraph features:
• Node Degree (Deg(u)): the number of 1-hop neighbors of node u in the graph.
• Clustering Coefficient (cu): the density of the ego-network of node u, computed as the ratio between the number of edges among its neighbors and the maximum possible number of such edges. Formally, given a graph G = (V, E), let N(u) be the 1-hop neighbor set of node u, with |N(u)| = Deg(u).
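These two features, together with the selection rules of Algorithm 3 below, can be sketched in a few lines; the adjacency format (a dict of neighbor sets) is an illustrative assumption.

```python
def clustering_coefficient(adj, u):
    """Density of u's ego-network: the fraction of realized edges among
    u's neighbors. `adj` maps each node to its set of neighbors
    (undirected); returns 0 when Deg(u) <= 1."""
    nbrs = adj.get(u, set())
    deg = len(nbrs)
    if deg <= 1:
        return 0.0
    pairs = sum(1 for v in nbrs for w in nbrs
                if v < w and w in adj.get(v, set()))
    return 2.0 * pairs / (deg * (deg - 1))

def rule_based_backbone(deg, cu, pool):
    """Rule-based selection following Algorithm 3; `pool` is an ordered
    list of available backbone names."""
    if deg >= 6 and "SGC" in pool:
        return "SGC"
    if cu < 0.2 and deg >= 4 and "SAGE" in pool:
        return "SAGE"
    if deg <= 2 and "MLP" in pool:
        return "MLP"
    if cu >= 0.4 and "GCN" in pool:
        return "GCN"
    return pool[-1]  # fall back to the last model in F
```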
The clustering coefficient cu is calculated as:

cu = 0 if Deg(u) ≤ 1; otherwise cu = 2 · |{(v, w) ∈ E | v, w ∈ N(u)}| / (Deg(u) · (Deg(u) − 1)). (8)

The simple rule-based backbone selection is given in Algorithm 3.

Algorithm 3: Rule-based Backbone Selection
Input: Node degree Deg(u), clustering coefficient cu, backbone pool F
Output: Selected model Fu for node u
if Deg(u) ≥ 6 and SGC ∈ F then Fu ← SGC;
else if cu < 0.2 and Deg(u) ≥ 4 and SAGE ∈ F then Fu ← SAGE;
else if Deg(u) ≤ 2 and MLP ∈ F then Fu ← MLP;
else if cu ≥ 0.4 and GCN ∈ F then Fu ← GCN;
else Fu ← the last model in F;
return Fu

A.6 Ganache Usage Introduction

In our framework, decentralized social prediction relies on smart contracts deployed on, and validator interactions running over, the blockchain infrastructure. This paper focuses on the design and algorithmic implications of decentralization; here we provide additional background on Ganache, the local Ethereum (ETH) development environment used in our experiments. Ganache is a widely adopted in-memory ETH simulator designed for testing and development. It allows developers to run a private blockchain instance with controllable parameters, including account creation, transaction latency, and miner behavior. Unlike the ETH public testnets or the main network, which incur real-world network latency and gas costs, Ganache executes smart contracts quickly and deterministically without causing network instability. In our deployment, Ganache is used to simulate validator selection, user-initiated prediction requests, and smart-contract-based voting under real blockchain interfaces. This setup allows a comprehensive evaluation of DeSocial's decentralized workflows, including contract deployment, transaction submission, and vote collection, without having to pay
the high implementation and economic costs required for deployment to a public ETH network.

A.7 Table of Notation

Table 5: List of main notations used in this paper.
Gt = (Vt, Et): temporal graph at time t, with node set Vt and edge set Et
F: pool of backbone models
FFull: the set of backbones used in this paper, i.e., {MLP, GCN, GAT, SAGE, SGC}
fΘu: graph learning model used by node u, with parameters Θu
Φp,q,t: validator committee for verifying link (p, q) at time t
Vote(ϕ, p, q, t): binary decision by validator ϕ on link (p, q) at time t
Ver(Φp,q,t, p, q, t): aggregated verification result by majority voting over Φp,q,t
zu: embedding vector of node u
σ(·): sigmoid activation function
Neg(ϕ, p, q, t): set of negative samples selected by validator ϕ ∈ Φp,q,t
Γ: set of historical neighbor pairs (vp, vn) for model selection
Πu,v: temporal weight of edge (u, v) based on its emergence time
Dt: all graph data up to time t, i.e., ∪_{τ=0}^{t} Gτ
Nt(u): 1-hop neighbor set of u in Gt

B Operation Time Analysis of DeSocial

B.1 DeSocial Operation Time Breakdown

Figure 6: Illustration of the DeSocial operation pipeline for a single prediction period (Steps 1-10). Each row represents one user's end-to-end process, from request submission to obtaining the decision. Different user roles are depicted with distinct person icons.

In this section, we provide a detailed breakdown of the runtime operations of DeSocial, deployed on Ganache, a local ETH development chain, to better understand its step-by-step execution. As shown in Figure 6, the full pipeline of DeSocial Full is analyzed from the perspective of an individual user. Below, we enumerate each operational step to clarify the sequence of interactions between the requesting users, validators, and the blockchain infrastructure:
•Step 1: User pi submits requests to predict social links with target nodes q1pi, q2pi, . . .
•Step 2: The blockchain collects all user requests, constructs Gt+1, and assigns a validator community Φ for each backbone model Fi ∈ F through the smart contract.
•Step 3: Each validator ϕ ∈ Vtval independently trains their own graph learning model fΘϕ based on the data Dt stored in their own local memory. Dt describes
the union of the historical snapshots G0, G1, . . . , Gt; each node stores one copy of Dt.
•Step 4: User pi creates a personalized neighborhood sampling task based on its local graph structure.
•Step 5: Validator nodes retrieve pi's request through the blockchain smart contract and evaluate it using the different available algorithms Fj.
•Step 6: One selected validator in each community executes the sampling task using algorithm Fj and returns results to the blockchain through the smart contract.
•Step 7: The result of each algorithm trial is returned to pi through the blockchain for evaluation.
•Step 8: User pi selects a preferred model Fpi based on the returned results.
•Step 9: Validators in Φ run Fpi on pi's request and submit their binary votes to the blockchain. The blockchain aggregates the votes to form the final prediction Gt+1pred. Both the voting and aggregation operations are defined by the smart contract.
•Step 10: The period ends; all nodes in the network copy the social network data Gt+1 and merge it into their graph databases via the blockchain smart contract.
We observe that as the number of requests to the ETH Ganache local blockchain increases, the overall runtime grows. To better understand the computational overhead introduced by the blockchain environment, we analyze the on-chain runtime under three configurations: (1) both modules enabled (Figure 6), (2) only the personalized algorithm selection module enabled (Figure 7), and (3) only the multi-validator decentralized consensus module enabled (Figure 8). We analyze the run time on the UCI and Memo-Tx datasets, representative of small and large graphs, respectively.
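The Step 9 vote aggregation can be sketched as a simple majority rule. This mirrors the smart contract's tally of True versus False votes; it is a Python sketch, not the Solidity implementation.

```python
from collections import Counter

def aggregate_votes(votes):
    """Majority vote over the binary decisions of the validator committee
    for one candidate link (p, q). Odd committee sizes (3, 5, 7, as used
    in the paper) avoid ties."""
    tally = Counter(bool(v) for v in votes)
    return tally[True] > tally[False]
```

With a 5-validator committee, three or more positive votes confirm the link.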
Figure 7: DeSocial pipeline with only the personalized algorithm selection module enabled. The blockchain assigns validators to evaluate candidate algorithms, and users select the best model without executing consensus voting.

B.2 Run Time Analysis of Decentralized Methods

B.2.1 Both Modules Enabled

This configuration is shown in Figure 6. Among the pipeline steps, Steps 1 and 8 cost negligible run time, as they involve only local Python instructions executed at a scale proportional to |Et| ∼ 10^4 and |Vt| ∼ 10^3, respectively. We now analyze the run time costs of the remaining steps, shown in Table 6. The runtime of
all steps is reported as an amortized average over all users sending link prediction requests at t. Steps 4, 5, 6, 7, and 10 have lower latency because they operate on user-neighborhood structures, which scale with |Vt| ∼ 10^3. Although Step 10 issues smart-contract calls whose number scales with |V| ∼ 10^4, it brings little overhead because these calls do not add blocks; the primary cost of merging the data is the data copy in Python. The primary contributors to runtime overhead are Steps 2, 3, and 9. Step 3, the graph model training step, dominates the computation time, as training over Dt is inherently sequential within each model and cannot be parallelized. Steps 2 and 9 involve frequent blockchain interactions proportional to the number of prediction requests, again scaling with |Et|.

Table 6: Run time (s) for each major step in DeSocial Full under ETH Ganache simulation.
Step 2: Validator Community Formation (scale |Et|, Smart Contract Execution): UCI 0.1193, Memo-Tx 1.414
Step 3: Graph Model Training (scale Dt, Model Training (GPU)): UCI 54.48, Memo-Tx 2011
Step 4: Create Backbone Evaluation Task (scale |Vt|, Python Random Function): UCI 0.0022, Memo-Tx 0.0005
Step 5: Request on Backbone Evaluation (scale |Vt|, Smart Contract Execution): UCI 0.0160, Memo-Tx 0.0177
Step 6: Evaluation Execution (scale |Vt|, Model Inference): UCI 0.0002, Memo-Tx 0.0002
Step 7: Return Evaluation Results (scale |Vt|, Smart Contract Execution): UCI 0.0161, Memo-Tx 0.0217
Step 9: Vote and Aggregation (scale |Et|, Smart Contract Execution): UCI 0.6233, Memo-Tx 9.081
Step 10: Merge Graph Data (scale |V|, Python Data Copy): UCI 0.0249, Memo-Tx 0.2429

B.2.2 Personalized Algorithm Module Enabled Only

As shown in Figure 7, when only the personalized algorithm module is enabled, DeSocial PA still involves multiple rounds of blockchain interaction, primarily for coordination and information exchange between users and validators. Most of the pipeline remains unchanged, except for three aspects:
• In Step 2, for each validation community, the blockchain selects only one validator.
• In Step 3, for each validation community, only one validator trains on the graph data Dt.
• In Step 9, as there is only one validator, it can make the decision directly without consensus.

Table 7: Run time (s) for each major step in DeSocial PA under ETH Ganache simulation.
Step 2: Validator Community Formation (scale |Et|, Smart Contract Execution): UCI 0.1419, Memo-Tx 1.360
Step 3: Graph Model Training (scale Dt, Model Training (GPU)): UCI 54.48, Memo-Tx 2011
Step 4: Create Backbone Evaluation Task (scale |Vt|, Python Random Function): UCI 0.0017, Memo-Tx 0.0005
Step 5: Request on Backbone Evaluation (scale |Vt|, Smart Contract Execution): UCI 0.0142, Memo-Tx 0.0169
Step 6: Evaluation Execution (scale |Vt|, Model Inference): UCI 0.0002, Memo-Tx 0.0002
Step 7: Return Evaluation Results (scale |Vt|, Smart Contract Execution): UCI 0.0182, Memo-Tx 0.0209
Step 9: Submit Decisions (scale |Et|, Smart Contract Execution): UCI 0.2396, Memo-Tx 3.220
Step 10: Merge Graph Data (scale |V|, Python Data Copy): UCI 0.0249, Memo-Tx 0.2429

After removing the operation of forming a five-validator committee, the number of blockchain interactions in Step 9 decreases, as each link prediction now requires only a single vote submission instead of five. However, the run-time reduction is not strictly linear: in our implementation, the smart contract still invokes the aggregation function to compare the numbers of True and False votes. Even with only one vote, this consensus logic
introduces a small but non-negligible overhead. It is important to note that when a user selects a slower algorithm that yields better performance, the runtime of Step 3 is determined by the slowest backbone in F: each validator independently trains its own model in parallel, and the overall execution must wait for the slowest backbone to complete. Here, GCN and GraphSAGE are the slowest backbones for DeSocial PA on UCI and Memo-Tx, respectively.

B.2.3 Decentralized Consensus Module Enabled Only

Table 8: Run time (s) for each major step in DeSocial Vote under ETH Ganache simulation.
Step 2: Validator Selection (scale |Et|, Smart Contract Execution): UCI 0.0855, Memo-Tx 1.130
Step 3: Graph Model Training (scale Dt, Model Training (GPU)): UCI 46.43, Memo-Tx 2011
Step 4: Vote and Aggregation (scale |Et|, Smart Contract Execution): UCI 0.4538, Memo-Tx 7.460
Step 5: Merge Graph Data (scale |V|, Python Data Copy): UCI 0.0249, Memo-Tx 0.2429

When only the decentralized consensus module is enabled, the DeSocial Vote framework skips the personalized algorithm selection phase and instead applies a fixed, prespecified backbone for all users, as depicted in Figure 8. In this setup, validators train their own models, but unlike DeSocial Full, no personalized selection process takes place; the trained models are simply evaluated for their effectiveness in the decentralized setting. After training, the validators submit their votes to the blockchain, where consensus is reached through majority voting. We analyze the run time of Steps 2 to 5 in Table 8. After removing the blockchain requests of personalized algorithm selection, i.e., forming multiple validation communities and issuing the backbone evaluation tasks, the run time of selecting validators (Step 2), voting and aggregating the decisions (Step 4), and broadcasting graph data (Step 5) is reduced. Notably, the variation in blockchain operation times across all three configurations is mainly due to CPU occupancy.
Step 3's runtime matches that of the centralized backbone, since no other backbones need to be run.

Figure 8: DeSocial pipeline with only the decentralized consensus module enabled. Users do not select personalized algorithms; validators independently train their models, and the blockchain finalizes predictions via majority voting.

C Additional Studies

C.1 Sensitivity to Hyperparameters in Personalized Algorithm Module

We conduct a sensitivity analysis on the personalized algorithm module by varying three key parameters: the time decay coefficient α, the number of sampled neighborhood pairs γ, and the backbone selection pool F. The descriptions of these hyperparameters are given in Appendix A.3.

Table 9: Impact of the time decay coefficient α on accuracy across datasets. Each block reports the means
and the standard deviations (%). Bold denotes the highest value per dataset for the three evaluation metrics. All other parameters are fixed to their optimal values.
Dataset, α: Acc@2, Acc@3, Acc@5
UCI, 0: 72.84 ± 0.27, 62.38 ± 0.32, 51.00 ± 0.33
UCI, −0.01: 73.02 ± 0.28, 62.74 ± 0.32, 51.37 ± 0.38
UCI, −0.1: 73.35 ± 0.28, 62.80 ± 0.30, 51.34 ± 0.32
UCI, −1: 72.35 ± 0.26, 61.60 ± 0.31, 49.94 ± 0.31
Memo-Tx, 0: 83.96 ± 0.13, 77.18 ± 0.14, 69.31 ± 0.15
Memo-Tx, −0.01: 83.45 ± 0.13, 76.72 ± 0.14, 68.97 ± 0.14
Memo-Tx, −0.1: 83.37 ± 0.13, 76.71 ± 0.14, 69.07 ± 0.14
Memo-Tx, −1: 83.12 ± 0.11, 76.60 ± 0.13, 68.97 ± 0.13
Enron, 0: 90.05 ± 0.09, 85.94 ± 0.11, 81.13 ± 0.13
Enron, −0.01: 90.02 ± 0.11, 85.86 ± 0.13, 80.92 ± 0.16
Enron, −0.1: 90.08 ± 0.11, 85.96 ± 0.13, 81.02 ± 0.17
Enron, −1: 89.60 ± 0.11, 85.40 ± 0.13, 80.39 ± 0.16
GDELT, 0: 95.56 ± 0.02, 92.44 ± 0.02, 87.55 ± 0.04
GDELT, −0.01: 95.57 ± 0.02, 92.44 ± 0.02, 87.58 ± 0.04
GDELT, −0.1: 95.55 ± 0.02, 92.40 ± 0.02, 87.53 ± 0.04
GDELT, −1: 95.48 ± 0.02, 92.30 ± 0.02, 87.36 ± 0.03

Sensitivity to α. As shown in Table 9, the choice of the time decay coefficient α affects the performance of the personalized algorithm selection module. Across α ∈ {0, −0.01, −0.1, −1}, performance tends to peak at a mild negative value and degrades as |α| increases. For the Web 3.0 dataset Memo-Tx, whose one-off, irregular, and non-periodic transactions lack strong temporal patterns, the temporal characteristics are weaker than in the Web 2.0 datasets; thus α = 0, which treats all historical interactions equally, achieves the best performance. This trend reflects how strongly recent interactions should be prioritized when constructing the local subgraph for algorithm selection, according to the graph's temporal characteristics.

Table 10: Effect of the neighbor sample size γ on personalized algorithm selection performance (%). Each block reports the means and the standard deviations. Best values per dataset are in bold. All other parameters are fixed to their optimal values.
Dataset γ Acc@2 Acc@3 Acc@5 UCI250 73.15 ± 0.28 62.59 ± 0.29 51.02 ± 0.33 500 72.76 ± 0.24 62.33 ± 0.29 50.83 ± 0.30 750 73.35 ± 0.28 62.80 ± 0.30 51.34 ± 0.32 1000 72.77 ± 0.29 62.26 ± 0.34 50.81 ± 0.37 1250 72.44 ± 0.28 62.08 ± 0.31 50.74 ± 0.33 Memo-Tx250 83.96 ± 0.13 77.18 ± 0.14 69.31 ± 0.15 500 83.81 ± 0.13 77.19 ± 0.14 69.52 ± 0.14 750 83.82 ± 0.12 77.03 ± 0.14 69.21 ± 0.15 1000 83.91 ± 0.12 77.28 ± 0.12 69.57 ± 0.13 1250 83.76 ± 0.14 76.97 ± 0.15 69.21 ± 0.17 Enron250 89.63 ± 0.09 85.43 ± 0.12 80.40 ± 0.15 500 89.80 ± 0.09 85.65 ± 0.10 80.69 ± 0.12 750 89.98 ± 0.12 85.85 ± 0.14 80.88 ± 0.16 1000 89.88 ± 0.10 85.75 ± 0.12 80.87 ± 0.15 1250 90.08 ± 0.11 85.96 ± 0.13 81.02 ± 0.17 GDELT250 95.54 ± 0.02 92.41 ± 0.03 87.54 ±
0.04 500 95.53 ± 0.02 92.40 ± 0.02 87.52 ± 0.04 750 95.54 ± 0.02 92.39 ± 0.02 87.51 ± 0.04 1000 95.57 ± 0.02 92.44 ± 0.02 87.58 ± 0.04 1250 95.55 ± 0.02 92.42 ± 0.02 87.55 ± 0.04

Sensitivity to γ. Table 10 shows the performance for γ ∈ {250, 500, 750, 1000, 1250}. In DeSocial, γ controls the size of the neighbor sample set used to determine the most suitable algorithm, and thus directly affects the local structural context used for personalized algorithm selection. A small γ leads to insufficient information, making the selector unstable, while a large γ may introduce outdated or noisy neighbors. An appropriate γ is therefore needed to achieve the best performance.

Sensitivity to F. Table 11 reports the Acc@2 of different backbone algorithm combinations F across four datasets. We analyze the impact of the backbone pool F from several perspectives and highlight potential implications for blockchain-based decentralized social network prediction frameworks. Specifically, we address the following questions:
•Q1: Which F yields the best performance?
•Q2: Do different datasets exhibit distinct preferences for specific backbones?
•Q3: Does a "less is more" phenomenon occur, where increasing the number of backbones degrades performance?
For Q1, in terms of Acc@2, the best F on UCI are {MLP, GCN, GraphSAGE, SGC} and {GAT, GraphSAGE, SGC}, with an Acc@2 of 73.35%. The best F on Memo-Tx is {GraphSAGE, SGC}, with an Acc@2 of 83.96%. The best F on Enron is {GAT, GraphSAGE}, with an Acc@2 of 90.08%. The best F on GDELT are {GraphSAGE, SGC} and {GAT, SGC}, with an Acc@2 of 95.56%. GraphSAGE and SGC appear in all four top-performing combinations, making them core backbones across diverse graph types. For each dataset, the best F always includes the best centralized backbone model, highlighting the foundational role of core models in supporting the performance of decentralized social network algorithms. For Q2, yes.
Although SGC and GraphSAGE are the core contributors to performance, UCI benefits from additionally incorporating GCN and MLP, while Enron and GDELT benefit from additionally incorporating GAT. This shows that different social network structures prefer different model combinations, which in turn demonstrates the importance of personalized algorithm selection in real-world deployments. For Q3, yes, the "less is more" phenomenon occurs in all datasets. In UCI, while selecting MLP and GraphSAGE improves over the centralized performance, adding GAT makes performance worse. In Memo-Tx, while selecting GraphSAGE and SGC performs best, adding GCN degrades the performance. In Enron, while selecting GAT and GraphSAGE performs best, adding any of SGC, MLP, or GCN hinders the improvement. In GDELT, while combining MLP and GAT achieves 91.36% Acc@2, adding GCN drops Acc@2 back to 91.29%. Moreover, combining more backbones may perform worse than the centralized models alone. Therefore, it is unrealistic to combine as many models as possible: increasing the number of models does not linearly improve performance and may instead introduce noise or redundancy. Our analysis verifies that a reasonable combination of models is more effective than simply piling up more models. For future work on blockchain-based decentralized social networks, more research is needed to explore the complementarities and
conflicts between backbone algorithms. Whether to adaptively choose combinations based on graph structure is the next question to be investigated.

Table 11: Performance comparison over different F. Acc@2 (%) is reported with mean and standard deviation. "Y" indicates improvement over all centralized models in F, while "N" indicates not.

F                         UCI             Memo-Tx         Enron           GDELT
MLP                       66.38±0.34  -   73.61±0.11  -   81.48±0.08  -   91.28±0.02  -
GCN                       63.90±0.17  -   69.62±0.13  -   79.92±0.09  -   82.94±0.04  -
GAT                       61.15±0.26  -   72.51±0.26  -   85.52±0.12  -   88.29±0.28  -
SAGE                      69.00±0.40  -   82.85±0.15  -   90.27±0.06  -   93.16±0.02  -
SGC                       72.77±0.24  -   80.37±0.05  -   88.24±0.04  -   95.59±0.02  -
MLP+GCN                   66.51±0.23  Y   76.20±0.10  Y   83.73±0.09  Y   91.02±0.02  N
MLP+GAT                   65.73±0.24  Y   74.22±0.22  Y   86.57±0.18  Y   91.36±0.06  Y
MLP+SAGE                  70.04±0.39  Y   82.08±0.16  N   89.86±0.07  N   92.76±0.02  N
MLP+SGC                   73.19±0.26  Y   80.73±0.09  Y   88.29±0.07  Y   95.42±0.02  N
GCN+GAT                   64.12±0.27  Y   74.29±0.24  Y   85.55±0.17  Y   87.70±0.16  N
GCN+SAGE                  69.15±0.39  Y   82.74±0.16  N   89.88±0.06  N   92.84±0.02  N
GCN+SGC                   72.74±0.23  N   80.15±0.08  N   88.24±0.06  N   95.56±0.02  N
GAT+SAGE                  68.30±0.44  N   81.83±0.18  N   90.08±0.11  N   92.96±0.05  N
GAT+SGC                   72.45±0.23  N   78.42±0.17  N   88.68±0.07  Y   95.56±0.02  N
SAGE+SGC                  73.25±0.33  Y   83.96±0.13  Y   89.81±0.04  N   95.57±0.02  N
MLP+GCN+GAT               65.92±0.26  N   75.45±0.19  Y   86.22±0.17  Y   91.29±0.05  Y
MLP+GCN+SAGE              69.53±0.33  Y   82.04±0.12  N   89.66±0.08  N   92.61±0.02  N
MLP+GCN+SGC               72.79±0.26  Y   80.60±0.08  Y   88.17±0.06  N   95.41±0.02  N
MLP+GAT+SAGE              68.88±0.34  N   81.08±0.16  N   89.64±0.10  N   92.74±0.04  N
MLP+GAT+SGC               72.87±0.27  Y   78.64±0.15  N   88.62±0.07  Y   95.40±0.02  N
MLP+SAGE+SGC              73.25±0.29  Y   83.57±0.12  Y   89.79±0.04  N   95.40±0.02  N
GCN+GAT+SAGE              68.48±0.37  N   81.73±0.15  N   89.89±0.09  N   92.82±0.04  N
GCN+GAT+SGC               72.74±0.22  N   78.20±0.15  N   88.62±0.07  N   95.55±0.02  N
GCN+SAGE+SGC              72.98±0.28  Y   83.79±0.13  Y   89.71±0.05  N   95.56±0.02  N
GAT+SAGE+SGC              73.35±0.27  Y   83.11±0.15  Y   90.03±0.06  N   95.55±0.02  N
MLP+GCN+GAT+SAGE          69.03±0.38  Y   81.20±0.14  N   89.57±0.11  N   92.64±0.02  N
MLP+GCN+GAT+SGC           72.83±0.28  Y   78.52±0.17  N   88.70±0.07  Y   95.40±0.02  N
MLP+GCN+SAGE+SGC          73.35±0.27  Y   83.27±0.13  Y   89.57±0.05  N   95.39±0.02  N
MLP+GAT+SAGE+SGC          72.86±0.26  Y   82.45±0.16  N   89.77±0.06  N   95.40±0.02  N
GCN+GAT+SAGE+SGC          72.77±0.30  N   82.63±0.14  N   89.86±0.07  N   95.54±0.02  N
MLP+GCN+GAT+SAGE+SGC      73.07±0.28  Y   82.36±0.15  N   89.75±0.06  N   95.39±0.02  N

C.2 Enhancements of Decentralized Consensus

To better understand the decentralized multi-validator consensus mechanism in DeSocial, we analyze the proportions of validator agreement levels and their relationship with accuracy in every testing period on UCI and Memo-Tx. For Enron and GDELT, we observe consistent results regarding the effectiveness of decentralized consensus. Figure 9 and Figure 10 illustrate the performance on three evaluation metrics and the proportions of different agreement levels for UCI, respectively. Figure 11 and Figure 12 show the same for Memo-Tx.

Figure 9: Testing performance on the UCI dataset (Acc@2, Acc@3, and Acc@5 over test periods 30–39). The stars illustrate the centralized performances while the bars illustrate the decentralized performances in each testing period.

Figure 10: Distribution of validator agreement levels (0/5 through 5/5) on the UCI dataset. Each set of ten bars corresponds to a
single model evaluated over test periods 30 to 39.

Accuracy Threshold and Voting Effectiveness. As shown in Figure 9 and Figure 11, in general, when the centralized accuracy is above 0.5, decentralized consensus almost always improves or maintains the performance. However, when the centralized model falls below 0.5, there is a greater risk of performance degradation due to the amplification of poor local predictions in the voting process. Therefore, a minimum model quality threshold is needed to leverage decentralized voting.

Robustness Against Outliers and Noise. The decentralized consensus methods help improve the overall performance even when the centralized experiments exhibit outliers or noise. For example, notable outliers are observed at Acc@2 for MLP at period 30, SGC at period 39, and GraphSAGE at periods 33 and 38 in Figure 9, and at Acc@2 for GraphSAGE and GCN at period 31 in Figure 11. In these cases, the performance difference between the best and worst centralized results can be as high as 5%. Under majority voting with decentralized consensus, however, the final performance not only does not decline under such extreme values but steadily improves. This indicates that decentralized consensus provides robust aggregation, effectively mitigating localized prediction failures.

Correlation Between Ambiguity and Improvement. For each testing period and each edge, we compute the level of agreement, defined as the number of validators (out of five) that correctly predict the ground-truth label, and report the proportions of each agreement level. In Figure 10 and Figure 12, darker colors indicate stronger consensus among validators. In the UCI dataset (Figure 10), we observe that ambiguous predictions (lighter colors) appear in similar proportions across the four backbone models. As a result, the performance improvements from centralized to decentralized variants are also relatively similar (1.76% on average).
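The agreement-level bookkeeping and majority voting described above can be sketched as follows. This is a hypothetical simulation, not DeSocial's actual pipeline: validator predictions are synthetic binary labels with an assumed per-validator accuracy of 0.7, whereas DeSocial's real validators are GNN backbones evaluated on link prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

n_edges, n_validators = 1000, 5
y_true = rng.integers(0, 2, size=n_edges)          # synthetic ground truth

# Each validator predicts correctly with probability 0.7 (assumed accuracy).
correct = rng.random((n_edges, n_validators)) < 0.7
preds = np.where(correct, y_true[:, None], 1 - y_true[:, None])

# Agreement level: how many of the 5 validators predict the ground truth.
agreement = correct.sum(axis=1)                    # values in 0..5
levels, counts = np.unique(agreement, return_counts=True)
proportions = counts / n_edges                     # cf. bars in Figures 10/12

# Decentralized consensus: simple majority vote over the 5 validators.
majority = (preds.sum(axis=1) >= 3).astype(int)
acc_centralized = correct[:, 0].mean()             # a single validator alone
acc_decentralized = (majority == y_true).mean()
```

With a per-validator accuracy above 0.5, the majority vote is more accurate than any single validator (a Condorcet-style effect), which matches the accuracy-threshold observation above; below 0.5, voting would amplify errors instead.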
In contrast, Figure 12 reveals that for the Memo-Tx dataset, GCN has a higher proportion of ambiguous decisions than the other backbones. This leads to a larger performance gain (10.89%) under decentralized voting for GCN, while the gains for the other three backbones average 2.98%.

Figure 11: Testing performance on the Memo-Tx dataset (Acc@2, Acc@3, and Acc@5 over test periods 30–39). The stars illustrate the centralized performances while the bars illustrate the decentralized performances in each testing period.

Figure 12: Distribution of validator agreement levels (0/5 through 5/5) on the Memo-Tx dataset. Each set of ten bars corresponds to a single model evaluated over test periods 30 to 39.

C.3 Consensus Analysis of Different Node Groups

In each temporal snapshot, nodes with higher degrees are considered more active, while those with lower degrees are less active. To analyze performance across different activity levels, we divide the graph nodes into four quartiles in each test period based on their degrees. The lowest 25% of nodes are assigned to Q1, while the highest 25% fall into Q4. Figure 13 illustrates the proportion of the
5/5 agreement (i.e., of all the agreements in this quartile, how many have all validators casting true votes) at Acc@2 for each quartile. Q1 nodes exhibit the lowest proportion of 5/5 agreement due to insufficient neighborhood information. Q2 and Q3 nodes have more neighbors, enabling more reliable feature aggregation and increasing the proportion of 5/5 agreement. Q4 nodes attain the highest proportion of 5/5 agreement in most cases, because the nodes with the most neighbors have enough local information to make accurate predictions. However, for all backbones on UCI and for GCN on GDELT, the proportion of 5/5 agreement in Q4 is lower than in Q3: the information learned by the model becomes redundant or even conflicting, and because the models interpret these structures in different ways, disagreement increases instead. As higher quartiles achieve higher proportions of 5/5 agreement, we conclude that the contributions to 5/5 agreement come mainly from nodes with higher degrees, i.e., from more active users.

Figure 13: 5/5 agreement across degree node quartiles for different models and datasets (average 5/5 agreement (%) by degree quartile on UCI, Memo-Tx, Enron, and GDELT for MLP, GCN, SAGE, and SGC).

D Broader Impacts

DeSocial aims to empower users on social platforms by allowing them to choose personalized social network algorithms and validate predictions through decentralized voting. This shift from centralized to decentralized decision-making has many potential social implications.

Positive Social Impacts. Firstly, by giving users the right to select a personalized social network algorithm, DeSocial reduces the risk of algorithmic centralization, which often serves platform interests over user needs.
Secondly, blockchain-based voting ensures that prediction results can be traced and verified, encouraging more transparent and auditable AI decisions. Thirdly, we give the community a novel solution for improving personalized recommendations, providing an additional way to extend existing state-of-the-art recommendation algorithms. Lastly, DeSocial aligns with the values of Web 3.0 infrastructure, contributing to ethical AI deployment in emerging digital economies.

Negative Social Impacts. Firstly, improper algorithm selection may reduce prediction quality. Secondly, misuse of Web 3.0 techniques, such as speculative behavior unrelated to genuine social interaction, may undermine the utility of decentralized social media. In particular, malicious validators that dominate the vote can game or bias the outcomes, especially without proper incentive or trust mechanisms. Given these risks, we currently implement this algorithm only in a locally simulated blockchain. For future deployment, common problems in blockchain environments, such as trustworthiness, fairness, security, and privacy, must be solved to ensure a broadly accepted and reliable social infrastructure for its users.

E Ethics Statement

This work contributes to the design of decentralized AI frameworks by combining graph learning with blockchain-enabled consensus mechanisms. It emphasizes user autonomy through personalized model selection and multi-node validation, offering an alternative to traditional centralized decision-making pipelines. All experiments are carried out in a controlled local blockchain simulation environment, without involving real blockchain deployments or human participants. The datasets used
arXiv:2505.21391v1 [cs.LG] 27 May 2025

Finite Sample Analysis of Linear Temporal Difference Learning with Arbitrary Features

Zixuan Xie* (University of Virginia, xie.zixuan@email.virginia.edu), Xinyu Liu* (University of Virginia, xinyuliu@virginia.edu), Rohan Chandra (University of Virginia, rohanchandra@virginia.edu), Shangtong Zhang (University of Virginia, shangtong@virginia.edu)

Abstract

Linear TD(λ) is one of the most fundamental reinforcement learning algorithms for policy evaluation. Previously, convergence rates were typically established under the assumption of linearly independent features, which does not hold in many practical scenarios. This paper instead establishes the first L2 convergence rates for linear TD(λ) operating under arbitrary features, without making any algorithmic modification or additional assumptions. Our results apply to both the discounted and average-reward settings. To address the potential non-uniqueness of solutions resulting from arbitrary features, we develop a novel stochastic approximation result featuring convergence rates to the solution set instead of a single point.

1 Introduction

Temporal difference learning (TD, Sutton [1988]) is a fundamental algorithm in reinforcement learning (RL, Sutton and Barto [2018]), enabling efficient policy evaluation by combining dynamic programming [Bellman, 1966] with stochastic approximation (SA, Benveniste et al. [1990], Kushner and Yin [2003], Borkar [2009]). Its linear variant, linear TD(λ) [Sutton, 1988], emerges as a practical extension, employing linear function approximation to tackle large or continuous state spaces where tabular representations become impractical. Linear TD(λ) takes the dot product between features and weights to compute the approximated value. Establishing theoretical guarantees for linear TD(λ), particularly convergence rates, has been a major focus of research. Most existing works (Table 1), however, require the features used in linear TD to be linearly independent.
As argued in Wang and Zhang [2024], this assumption is impractical in many scenarios. For example, in continual learning with sequentially arriving data [Ring, 1994, Khetarpal et al., 2022, Abel et al., 2023], there is no way to rigorously verify whether the features are independent or not. See Wang and Zhang [2024] for more discussion on the restrictions of the feature-independence assumption. Furthermore, Peter [1992], Tsitsiklis and Roy [1996, 1999] also outline the elimination of the linear independence assumption as a future research direction. While efforts have been made to eliminate the linear independence assumption [Wang and Zhang, 2024], they only provide asymptotic (almost sure) convergence guarantees in the discounted setting. By contrast, this paper establishes the first L2 convergence rates for linear TD(λ) with arbitrary features in both the discounted and average-reward settings. This success is enabled by a novel stochastic approximation result (Theorem 3) concerning convergence rates to a solution set instead of a single point, driven by a novel Lyapunov function. This new result provides a unified approach applicable to both the discounted (Theorem 1) and average-reward (Theorem 2) settings. Notably, we do not make any algorithmic modification and do not introduce any additional assumptions. Table 1 provides a detailed comparison of existing theoretical analyses for linear TD(λ), contextualizing our contributions within the landscape of prior work.

*Equal contribution. Preprint.

Reference                                Setting   Features     Noise Type   Rate
Tsitsiklis and Roy [1996]                γ < 1     Independent  Markovian
Bhandari et al. [2018]                   γ < 1     Independent  Markovian    ✓
Lakshminarayanan and Szepesvári [2018]   γ < 1
Independent  i.i.d.  ✓
Srikant and Ying [2019]                  γ < 1     Independent  Markovian    ✓
Chen et al. [2023b]                      γ < 1     Independent  i.i.d.       ✓
Wang and Zhang [2024]                    γ < 1     Arbitrary    Markovian
Mitra [2025]                             γ < 1     Independent  Markovian    ✓
Theorem 1                                γ < 1     Arbitrary    Markovian    ✓
Tsitsiklis and Roy [1999]                γ = 1     Independent  Markovian
Zhang et al. [2021c]                     γ = 1     Independent  Markovian    ✓
Chen et al. [2025]                       γ = 1     Independent  Markovian    ✓
Theorem 2                                γ = 1     Arbitrary    Markovian    ✓

Table 1: Comparison of finite-sample analyses for linear TD(λ). "Setting" indicates the problem setting: γ < 1 stands for the discounted setting and γ = 1 stands for the average-reward setting. "Features" describes assumptions on the features: "Independent" indicates linear independence is assumed; "Arbitrary" indicates no assumption is made on the features. "Noise Type" indicates the data generation process: Markovian samples or independent and identically distributed (i.i.d.) samples. "Rate" is checked if a convergence rate is provided.

2 Background

Notations. We use ⟨x, y⟩ := x⊤y to denote the standard inner product in Euclidean spaces and ∥·∥ to denote the ℓ2 norm for vectors and the associated induced operator norm (i.e., the spectral norm) for matrices, unless stated otherwise. A function f is said to be L-smooth (w.r.t. ∥·∥) if for all w, w′, f(w′) ≤ f(w) + ⟨∇f(w), w′ − w⟩ + (L/2)∥w′ − w∥². For a matrix A, col(A) denotes its column space, ker(A) denotes its kernel, and A† denotes its Moore–Penrose inverse. When x is a point and U is a set, we denote d(x, U) := inf_{y∈U} ∥x − y∥, the Euclidean distance from x to U. For sets U, V, their Minkowski sum is U + V := {u + v | u ∈ U, v ∈ V}, and U⊥ denotes the orthogonal complement of U. We use 0 and 1 to denote the zero vector and the all-ones vector respectively, where the dimension is clear from context. For any square matrix A ∈ R^{d×d} (not necessarily symmetric), we say A is negative definite (n.d.) if there exists a ξ > 0 such that x⊤Ax ≤ −ξ∥x∥² for all x ∈ R^d. For any set E ⊆ R^d, we say A is n.d. on E if there exists a ξ > 0 such that x⊤Ax ≤ −ξ∥x∥² for all x ∈ E.
A is negative semidefinite (n.s.d.) if ξ = 0 in the above definition.

Markov Decision Processes. We consider an infinite-horizon Markov Decision Process (MDP, Bellman [1957]) defined by a tuple (S, A, p, r, p_0), where S is a finite set of states, A is a finite set of actions, p : S × S × A → [0, 1] is the transition probability function, r : S × A → R is the reward function, and p_0 : S → [0, 1] denotes the initial distribution. In this paper, we focus on the policy evaluation problem, where the goal is to estimate the value function of an arbitrary policy π : A × S → [0, 1]. At time step 0, an initial state S_0 is sampled from p_0. At each subsequent time step t, the agent observes state S_t ∈ S, executes an action A_t ∼ π(·|S_t), receives reward R_{t+1} := r(S_t, A_t), and transitions to the next state S_{t+1} ∼ p(·|S_t, A_t). We use P_π to denote the state transition matrix induced by the policy π, i.e., P_π[s, s′] = Σ_{a∈A} π(a|s) p(s′|s, a). Let d_π ∈ R^{|S|} be the stationary distribution of the Markov chain induced by the policy π. We use D_π to denote the diagonal matrix whose diagonal is d_π.

Linear Function Approximation. In this paper, we use linear function approximation to approximate value functions v_π : S
→ R (to be defined shortly). We consider a feature mapping x : S → R^d and a weight vector w ∈ R^d. We then approximate v_π(s) with x(s)⊤w. We use X ∈ R^{|S|×d} to denote the feature matrix, where the s-th row of X is x(s)⊤. The approximated state-value function across all states can then be represented as the vector Xw ∈ R^{|S|}. The goal is thus to find a w such that Xw closely approximates v_π.

Discounted Setting. In the discounted setting, we introduce a discount factor γ ∈ [0, 1). The (discounted) value function v_π : S → R for policy π is defined as v_π(s) := E[Σ_{i=0}^∞ γ^i R_{t+i+1} | S_t = s]. We define the Bellman operator T : R^{|S|} → R^{|S|} as Tv := r_π + γP_π v, where r_π ∈ R^{|S|} is the vector of expected immediate rewards under π, with components r_π(s) = Σ_a π(a|s) r(s, a). With a λ ∈ [0, 1], the λ-weighted Bellman operator T_λ is defined as T_λ v := (1 − λ) Σ_{m=0}^∞ λ^m T^{m+1} v = r_λ + γP_λ v, where

r_λ = Σ_{k=0}^∞ (λγ)^k P_π^k r_π = (I − γλP_π)^{-1} r_π,
P_λ = (1 − λ) Σ_{m=0}^∞ (λγ)^m P_π^{m+1} = (1 − λ)(I − γλP_π)^{-1} P_π.

This represents a weighted average of multi-step applications of T. It is well known that v_π is the unique fixed point of T_λ [Bertsekas and Tsitsiklis, 1996]. Linear TD(λ) is a family of TD learning algorithms that use eligibility traces to estimate v_π for the fixed policy π with linear function approximation. The algorithm maintains a weight vector w_t ∈ R^d and an eligibility trace vector e_t ∈ R^d, with the following update rules:

e_t = γλ e_{t−1} + x(S_t),  e_{−1} = 0,
w_{t+1} = w_t + α_t (R_{t+1} + γ x(S_{t+1})⊤ w_t − x(S_t)⊤ w_t) e_t.  (Discounted TD)

Here, {α_t} is the learning rate. The eligibility trace e_t tracks recently visited states, assigning credit for the prediction error to multiple preceding states. Let

A := X⊤ D_π (γP_λ − I) X,  b := X⊤ D_π r_λ,  W* := {w | Aw + b = 0}.

If X has full column rank, Tsitsiklis and Roy [1996] prove that W* is a singleton and {w_t} converges to −A^{-1}b almost surely. A key result used by Tsitsiklis and Roy [1996] is that the matrix D_π(γP_λ − I) is n.d. [Sutton, 1988]. As a result, the matrix A is also n.d. when X has full column rank.
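The (Discounted TD) iteration above is easy to simulate. The sketch below runs linear TD(λ) on a toy 3-state chain with deliberately rank-deficient features (the chain, rewards, feature matrix, and step sizes are all illustrative assumptions, not from the paper), and checks that the value estimate Xw_t approaches Xw* for a w* solving Aw + b = 0:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy policy-induced chain (assumed for illustration).
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
r = np.array([1.0, 0.0, 2.0])        # expected reward when leaving each state
gamma, lam = 0.9, 0.5

# Rank-deficient features: column 3 duplicates column 1.
X = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])

# Fixed-point quantities: A = X^T D (gamma*P_lam - I) X, b = X^T D r_lam.
d = np.ones(3) / 3
for _ in range(2000):
    d = d @ P                        # stationary distribution (power iteration)
D = np.diag(d)
r_lam = np.linalg.solve(np.eye(3) - gamma * lam * P, r)
P_lam = (1 - lam) * np.linalg.solve(np.eye(3) - gamma * lam * P, P)
A = X.T @ D @ (gamma * P_lam - np.eye(3)) @ X
b = X.T @ D @ r_lam
w_star = np.linalg.lstsq(A, -b, rcond=None)[0]   # one point of W*
v_hat = X @ w_star                   # value estimate (same for all of W*)

# (Discounted TD): e_t = gamma*lam*e_{t-1} + x(S_t);  w update via TD error.
w, e, s = np.zeros(3), np.zeros(3), 0
err0 = np.linalg.norm(X @ w - v_hat)
for t in range(100_000):
    s_next = int(rng.choice(3, p=P[s]))
    alpha = 0.1 / (1 + t / 10_000)   # decaying step size (assumed schedule)
    e = gamma * lam * e + X[s]
    delta = r[s] + gamma * X[s_next] @ w - X[s] @ w
    w = w + alpha * delta * e
    s = s_next

err_T = np.linalg.norm(X @ w - v_hat)
assert err_T < 0.5 * err0            # value estimate moved toward X w*
```

Even though A is singular here (ker(X) is nontrivial), the iterates' value estimate Xw_t still approaches the unique Xw*, illustrating the arbitrary-feature behavior the paper analyzes.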
Wang and Zhang [2024] prove, without making any assumption on X, that W* is always nonempty and that {w_t} converges to W* almost surely. A key challenge there is that, without assumptions on X, A is only n.s.d.

Average-Reward Setting. In the average-reward setting, the overall performance of a policy π is measured by the average reward J_π := lim_{T→∞} (1/T) E[Σ_{t=0}^{T−1} R_t]. The corresponding (differential) value function is defined as v_π(s) = lim_{T→∞} (1/T) Σ_{i=0}^{T−1} E[(r(S_{t+i}, A_{t+i}) − J_π) | S_t = s]. We define the Bellman operator T : R^{|S|} → R^{|S|} as Tv := r_π − J_π 1 + P_π v. Similarly, the λ-weighted counterpart T_λ is defined as T_λ v := r_λ − (J_π/(1 − λ)) 1 + P_λ v. Although v_π is a fixed point of T_λ, it is not the unique fixed point. In fact,

{v_π + c1 | c ∈ R}  (2)

are all the fixed points of T_λ [Puterman, 2014]. Linear average-reward TD(λ) is an algorithm for estimating both J_π and v_π using linear function approximation and eligibility traces. The update rules are

e_t = λ e_{t−1} + x(S_t),  e_{−1} = 0,
w_{t+1} = w_t + α_t (R_{t+1} − Ĵ_t + x(S_{t+1})⊤ w_t − x(S_t)⊤ w_t) e_t,  (Average Reward TD)
Ĵ_{t+1} = Ĵ_t + β_t (R_{t+1} − Ĵ_t),

where {α_t} and {β_t} are learning rates. Let

A := X⊤ D_π (P_λ − I) X,  b := X⊤ D_π (r_λ − (J_π/(1 − λ)) 1),  W* := {w | Aw + b = 0}.  (4)

If X has full column rank and 1 ∉ col(X), Tsitsiklis and Roy [1999] prove that W* is a singleton and {w_t} converges to −A^{-1}b almost surely. This is made possible by an important fact from the Perron–Frobenius theorem (see, e.g., Seneta [2006]) that

{w | w⊤ D_π (P_λ − I) w = 0} = {c1 | c ∈ R}.  (5)

Zhang et al. [2021c] further provide a convergence rate, still assuming X has full column rank but without assuming 1 ∉ col(X). When X does not have full column rank, to our knowledge, it is not even clear whether W* is
always nonempty or not, much less what the behavior of {w_t} is.

3 Main Results

We start with our assumptions. As promised, we do not make any assumption on X.

Assumption 3.1. The Markov chain associated with P_π is irreducible and aperiodic.

Assumption LR. The learning rates are α_t = α/(t + t_0)^ξ and β_t = c_β α_t, where ξ ∈ (0.5, 1], α > 0, t_0 > 0, and c_β > 0 are constants.

Discounted Setting. Wang and Zhang [2024] prove the almost sure convergence of (Discounted TD) with arbitrary features by using ∥w − w*∥², with an arbitrary and fixed w* ∈ W*, as a Lyapunov function and analyzing the properties of the ODE dw(t)/dt = Aw(t). Since A is only n.s.d., Wang and Zhang [2024] conduct their analysis in the complex number field. In this work, instead of following the ODE-based analysis originating from Tsitsiklis and Roy [1996], Borkar and Meyn [2000], we extend Srikant and Ying [2019] to obtain convergence rates by using d(w, W*)² as the Lyapunov function. To our knowledge, this is the first time that such a distance function to a set is used as a Lyapunov function to analyze RL algorithms, which is our key technical contribution from the methodology aspect. According to Theorem 1 of Wang and Zhang [2024], W* is nonempty, and apparently convex and closed.² Let Γ(w) := arg min_{w*∈W*} ∥w − w*∥ be the orthogonal projection onto W*. We then define L(w) := (1/2) d(w, W*)² = (1/2) ∥w − Γ(w)∥². Two important and highly non-trivial observations are (i) ∇L(w) = w − Γ(w) (Example 3.31 of Beck [2017]) and (ii) L(w) is 1-smooth w.r.t. ∥·∥ (Example 5.5 of Beck [2017]). Both (i) and (ii) result from the fact that W* is nonempty, closed, and convex. Using L(w) as the Lyapunov function, together with further characterization of ∇L(w) (Section 5.2), we obtain

Theorem 1. Let Assumptions 3.1 and LR hold and λ ∈ [0, 1]. Then for sufficiently large t_0 and α, there exist some constants C_Thm1 and κ_1 > 1 such that the iterates {w_t} generated by (Discounted TD) satisfy for all t

E[d(w_t, W*)²] ≤ C_Thm1 (t_0/t)^{⌊κ_1⌋} d(w_0, W*)² + C_Thm1 ln(t + t_0) / (t + t_0)^{min(2ξ−1, ⌊κ_1⌋−1)}.

The proof is in Section 5.2.
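The projection and Lyapunov properties used above admit a quick numerical sanity check. The sketch below uses a toy symmetric n.s.d. matrix A and a consistent b (both assumed for illustration), realizes Γ via the Moore–Penrose pseudoinverse, and verifies the gradient identity, 1-smoothness, and a negative drift along h(w) = Aw + b:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy rank-deficient A (symmetric n.s.d., n.d. on ker(A)^perp) and a
# consistent b, so W* = {w : Aw + b = 0} is a nonempty affine set.
B = rng.standard_normal((4, 2))
A = -B @ B.T                       # rank 2
w0 = rng.standard_normal(4)
b = -A @ w0                        # w0 is one point of W*

A_pinv = np.linalg.pinv(A)

def proj(w):                       # orthogonal projection Gamma(w) onto W*
    return w - A_pinv @ (A @ w + b)

def L(w):                          # L(w) = 0.5 * d(w, W*)^2
    return 0.5 * np.linalg.norm(w - proj(w)) ** 2

w = rng.standard_normal(4)
assert np.allclose(A @ proj(w) + b, 0)          # Gamma(w) indeed lies in W*

# (i) gradient identity: grad L(w) = w - Gamma(w)
g = w - proj(w)

# (ii) 1-smoothness: L(w') <= L(w) + <g, w'-w> + 0.5*||w'-w||^2
wp = w + rng.standard_normal(4)
assert L(wp) <= L(w) + g @ (wp - w) + 0.5 * np.linalg.norm(wp - w) ** 2 + 1e-9

# Drift: <grad L(w), h(w)> = (w-Gamma(w))^T A (w-Gamma(w)) <= -c * L(w)
drift = g @ (A @ w + b)
assert drift <= -1e-9 * L(w)
```

The drift check mirrors the mechanism in the proof: since AΓ(w) + b = 0, the expected update satisfies h(w) = A(w − Γ(w)), and negative definiteness of A on ker(A)⊥ makes L(w) decrease in expectation.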
Notably, Lemma 3 of Wang and Zhang [2024] states that for any w*, w** ∈ W*, it holds that Xw* = Xw**. We then define

v̂_π := Xw*  (6)

for any w* ∈ W*. Theorem 1 then also gives the L2 convergence rate of the value estimate, i.e., the rate at which Xw_t converges to v̂_π. The value estimate v̂_π is the unique fixed point of a projected Bellman equation. See Wang and Zhang [2024] for more discussion on the properties of v̂_π.

Average-Reward Setting. Characterizing W* is much more challenging here. We first present a novel decomposition of the feature matrix X. To this end, define m := rank(X) ≤ min{|S|, d}. If m = 0, all the results in this work are trivial, so we discuss only the case m ≥ 1.

Lemma 1. There exist matrices X_1, X_2 such that X = X_1 + X_2 with the following properties: (1) rank(X_1) = m − I{1 ∈ col(X)} and 1 ∉ col(X_1); (2) X_2 = 1θ⊤ with θ ∈ R^d.

The proof is in Section B.1, with I being the indicator function. Essentially, X_2 is a rank-one matrix with identical rows θ⊤ (i.e., the i-th column of X_2 is θ_i 1). To our knowledge, this is the first time that such a decomposition is used to analyze average-reward RL algorithms, which is our second technical contribution from the methodology aspect. This decomposition is useful in three aspects. First, we have A = X_1⊤ D_π (P_λ − I) X_1 (Lemma 14). Second, this decomposition is the
key to proving that W* is nonempty (Lemma 15). Third, this decomposition is the key to characterizing W*, in that W* = {w*} + ker(X_1) with w* being any vector in W* (Lemma 16). To better understand this characterization, we note that ker(X_1) = {w | Xw = c1, c ∈ R} (Lemma 16). As a result, adding any w_0 ∈ ker(X_1) to a weight vector w changes the resulting value function Xw only by c1. Two values v_1 and v_2 can be considered "duplicates" if v_1 − v_2 = c1 (cf. (2)). So intuitively, ker(X_1) is the source of the "duplication".

² This theorem only discusses the case of λ = 0. The proof for a general λ ∈ [0, 1] is exactly the same up to a change of notations.

With the help of this novel decomposition, we obtain

Theorem 2. Let Assumptions 3.1 and LR hold and λ ∈ [0, 1). Then for sufficiently large α, t_0, and c_β, there exist some constants C_Thm2 and κ_2 > 1 such that the iterates {w_t} generated by (Average Reward TD) satisfy for all t

E[(Ĵ_t − J_π)² + d(w_t, W*)²] ≤ C_Thm2 (t_0/t)^{⌊κ_2⌋} [(Ĵ_0 − J_π)² + d(w_0, W*)²] + C_Thm2 ln(t + t_0) / (t + t_0)^{min(2ξ−1, ⌊κ_2⌋−1)}.

The proof is in Section 5.3.

Stochastic Approximation. We now present a general stochastic approximation result used to prove Theorems 1 and 2. The notations in this part are independent of the rest of the paper. We consider a general iterative update rule for a weight vector w ∈ R^d, driven by a time-homogeneous Markov chain {Y_t} evolving in a possibly infinite space Y:

w_{t+1} = w_t + α_t H(w_t, Y_{t+1}),  (SA)

where H : R^d × Y → R^d defines the incremental update.

Assumption A1. There exists a constant C_A1 such that sup_{y∈Y} ∥H(0, y)∥ < ∞ and ∥H(w_1, y) − H(w_2, y)∥ ≤ C_A1 ∥w_1 − w_2∥ for all w_1, w_2, y.

Assumption A2. {Y_t} has a unique stationary distribution d_Y.

Let h(w) := E_{y∼d_Y}[H(w, y)]. Assumption A1 then immediately implies that ∥h(w_1) − h(w_2)∥ ≤ C_A1 ∥w_1 − w_2∥ for all w_1, w_2. In many existing works on stochastic approximation [Borkar and Meyn, 2000, Chen et al., 2021b, Borkar et al., 2021, Qian et al., 2024], it is assumed that h(w) = 0 admits a unique solution.
To cope with the challenges of linear TD with arbitrary features, we relax this assumption and consider a set W*. Importantly, W* does not need to contain all solutions to h(w) = 0. Instead, we make the following assumptions on W*.

Assumption A3. W* is nonempty, closed, and convex.

Notably, W* does not need to be bounded. Assumption A3 ensures that the orthogonal projection onto W* is well defined, allowing us to define Γ(w) := arg min_{w*∈W*} ∥w − w*∥ and L(w) := (1/2) ∥w − Γ(w)∥². As discussed before, Assumption A3 ensures that ∇L(w) = w − Γ(w) and that L is 1-smooth w.r.t. ∥·∥ [Beck, 2017]. We further assume that the expected update h(w_t) decreases L(w_t) in the following sense, making L(w) a candidate Lyapunov function.

Assumption A4. There exists a constant C_A4 > 0 such that almost surely, ⟨∇L(w_t), h(w_t)⟩ ≤ −C_A4 L(w_t).

Lastly, we make the most "unnatural" assumption on W*.

Assumption A5. There exist a matrix X and constants C_A5 and τ ∈ [0, 1) such that (1) for all w* ∈ W*, ∥Xw*∥ ≤ C_A5; (2) for all w, y, ∥H(w, y)∥ ≤ C_A5 (∥Xw∥ + 1); (3) for any n ≥ 1,

∥h(w) − E[H(w, Y_{t+n}) | Y_t]∥ ≤ C_A5 τ^n (∥Xw∥ + 1).  (7)

This assumption is technically motivated but trivially holds in our analyses of (Discounted TD) and (Average Reward TD). Specifically, Assumption A1 immediately leads to the at-most-linear growth ∥H(w, y)∥ ≤ C_A1,1 (∥w∥ + 1) for some constant C_A1,1. However, this bound is insufficient for our analysis because ∥w∥ ≤ ∥w − Γ(w)∥ + ∥Γ(w)∥ but Γ(w) ∈ W* can be unbounded. By Assumption A5, we instead have ∥Xw∥ ≤ ∥Xw − XΓ(w)∥ + ∥XΓ(w)∥ ≤ ∥X∥ ∥w − Γ(w)∥ + C_A5. The inequality (7) is related to the geometric mixing of
the chain, and we additionally include Xw in the bound for the same reason. We now present our general result regarding the convergence rate of (SA) to W*.

Theorem 3. Let Assumptions A1–A5 and LR hold. Denote κ := α C_A4. Then there exist some constants t_0 and C_Thm3 such that the iterates {w_t} generated by (SA) satisfy for all t

E[L(w_t)] ≤ C_Thm3,1 (t_0/t)^{⌊κ⌋} L(w_0) + C_Thm3,2 ln(t + t_0) / (t + t_0)^{min(2ξ−1, ⌊κ⌋−1)}.

The proof is in Section 5.1.

4 Related Works

Most prior works on the convergence of linear TD summarized in Table 1 rely on linearly independent features. In fact, the reliance on feature independence goes beyond linear TD and exists in almost all previous analyses of RL algorithms with linear function approximation; see, e.g., Sutton et al. [2008, 2009], Maei [2011], Hackman [2013], Yu [2015, 2016], Zou et al. [2019], Yang et al. [2019], Zhang et al. [2020b], Bo et al. [2020], Xu et al. [2020a], Zhang et al. [2020a], Xu et al. [2020b], Wu et al. [2020], Chen et al. [2021a], Long et al. [2021], Qiu et al. [2021], Zhang et al. [2021a,b], Xu et al. [2021], Zhang et al. [2022], Zhang and Whiteson [2022], Zhang et al. [2023], Chen et al. [2023a], Nicolò et al. [2024], Shaan and Siva [2024], Yue et al. [2024], Swetha et al. [2024], Liu et al. [2025a], Qian and Zhang [2025], Sreejeet and Aritra [2025], Yang et al. [2025], Chen et al. [2025], Liu et al. [2025b]. But as argued by Peter [1992], Tsitsiklis and Roy [1996, 1999], Wang and Zhang [2024], relaxing this assumption is an important research direction.

This work can be viewed as an extension of Wang and Zhang [2024] and Zhang et al. [2021c]. In terms of (Discounted TD), we extend Wang and Zhang [2024] by providing a finite-sample analysis. Though we rely on the characterization of W* from Wang and Zhang [2024], the techniques we use for the finite-sample analysis are entirely different from the techniques Wang and Zhang [2024] use for almost sure asymptotic convergence.
In terms of (Average Reward TD), we extend Zhang et al. [2021c] by allowing X to be arbitrary. Essentially, key to Zhang et al. [2021c] is their proof that A is n.d. on a subspace E, assuming X has full column rank. We extend Zhang et al. [2021c] in that we give a finer and more detailed characterization of the counterpart of their E through the novel decomposition of the features (Lemma 1) and establish the n.d. property under weaker conditions (i.e., without assuming X has full column rank). Our improvements are made possible by the novel Lyapunov function L(w), and we argue that this Lyapunov function can be used to analyze many other linear RL algorithms with arbitrary features. In terms of stochastic approximation, our Theorem 3 is novel in that it allows convergence to a possibly unbounded set. By contrast, most prior works on stochastic approximation study convergence to a point [Borkar and Meyn, 2000, Borkar et al., 2021, Chen et al., 2020, 2021b, Zhang et al., 2022, Chen et al., 2023b, Qian et al., 2024, Liu et al., 2025a]. In the case of
https://arxiv.org/abs/2505.21391v1
convergence to a set, most prior works require the set to be bounded [Kushner and Yin, 2003, Borkar, 2009, Liu et al., 2025a]. Only a few prior works allow stochastic approximation to converge to an unbounded set, see, e.g., Bravo and Cominetti [2022], Chen [2025], Blaser and Zhang [2025], which apply only to tabular RL algorithms.

5 Proofs of the Main Results

5.1 Proof of Theorem 3

Proof. From the 1-smoothness of $L(w)$ and (SA), we can get
\[
L(w_{t+1}) \le L(w_t) + \alpha_t\langle w_t - \Gamma(w_t), h(w_t)\rangle + \alpha_t\langle w_t - \Gamma(w_t), H(w_t, Y_t) - h(w_t)\rangle + \frac{1}{2}\alpha_t^2\|H(w_t, Y_t)\|^2. \tag{8}
\]
We then bound the terms on the RHS one by one. The term $\langle w - \Gamma(w), h(w)\rangle$ is already bounded in Assumption A4.

Lemma 2. There exists a positive constant $C_2$ such that for any $w$, $\|Xw\| \le C_2(\|w - \Gamma(w)\| + 1)$.

The proof is in Section C.1. With Lemma 2 and Assumption A5, the last term in (8) can be bounded easily.

Lemma 3. There exists a constant $C_3$ such that $\|H(w_t, Y_t)\|^2 \le C_3(\|w_t - \Gamma(w_t)\|^2 + 1)$.

The proof is in Section C.2. To bound $\langle w_t - \Gamma(w_t), H(w_t, Y_t) - h(w_t)\rangle$, leveraging (7), we define
\[
\tau_\alpha \doteq \min\{n \ge 0 \mid C_{A5}\tau^n \le \alpha\} \tag{9}
\]
as the number of steps the Markov chain needs to mix to an accuracy $\alpha$. In addition, we use the shorthand $\alpha_{t_1,t_2} \doteq \sum_{i=t_1}^{t_2} \alpha_i$. Then, with techniques from Srikant and Ying [2019], we obtain

Lemma 4. There exists a constant $C_4$ such that $\mathbb{E}[\langle w_t - \Gamma(w_t), H(w_t, Y_t) - h(w_t)\rangle] \le C_4\alpha_{t-\tau_{\alpha_t}, t-1}(\|w_t - \Gamma(w_t)\|^2 + 1)$.

The proof is in Section C.3. Plugging all the bounds back into (8), we obtain

Lemma 5. There exists some $D_t = O(\alpha_t \alpha_{t-\tau_{\alpha_t}, t-1})$ such that $\mathbb{E}[L(w_{t+1})] \le (1 - C_{A4}\alpha_t)\mathbb{E}[L(w_t)] + D_t$.

The proof is in Section C.4. Recursively applying Lemma 5 then completes the proof of Theorem 3 (see Section C.5 for details).

In the following sections, we first map the general update (SA) to (Discounted TD) and (Average Reward TD) by defining $H(w, y)$, $h(w)$, and $L(w)$ properly. Then we bound the remaining term $\langle\nabla L(w_t), h(w_t)\rangle$ to complete the proof.

5.2 Proof of Theorem 1

Proof. We first rewrite (Discounted TD) in the form of (SA).
To this end, we define $Y_{t+1} \doteq (S_t, A_t, S_{t+1}, e_t)$, which evolves in an infinite space $\mathcal{Y} \doteq \mathcal{S}\times\mathcal{A}\times\mathcal{S}\times\{e \mid \|e\| \le C_e\}$ with $C_e \doteq \frac{\max_s\|x(s)\|}{1-\gamma\lambda}$ being the straightforward bound on $\sup_t\|e_t\|$. We define the incremental update $H:\mathbb{R}^d\times\mathcal{Y}\to\mathbb{R}^d$ as
\[
H(w, y) = (r(s, a) + \gamma x(s')^\top w - x(s)^\top w)e, \tag{10}
\]
using the shorthand $y = (s, a, s', e)$. We now proceed to verifying the assumptions of Theorem 3. Assumption A1 is verified by the following lemma.

Lemma 6. There exists some finite $C_6$ such that $\|H(w_1, y) - H(w_2, y)\| \le C_6\|w_1 - w_2\|$ for all $w_1, w_2, y$. Moreover, $\sup_{y\in\mathcal{Y}}\|H(0, y)\| < \infty$.

The proof is in Section D.1. For Assumption A2, Theorem 3.2 of Yu [2012] confirms that $\{Y_t\}$ has a unique stationary distribution $d_{\mathcal{Y}}$. Yu [2012] also computes that $h(w) \doteq \mathbb{E}_{y\sim d_{\mathcal{Y}}}[H(w, y)] = Aw + b$. Assumption A3 trivially holds by the definition of $W_*$. For Assumption A4, the key observation is that $A\Gamma(w) + b = 0$ always holds because $\Gamma(w) \in W_*$. Then we have
\[
h(w) = Aw + b = (Aw + b) - (A\Gamma(w) + b) = A(w - \Gamma(w)).
\]
Thus the term $\langle\nabla L(w), h(w)\rangle$ can be written as $(w - \Gamma(w))^\top A(w - \Gamma(w))$. We now prove that for whatever $X$, it always holds that $A$ is n.d. on $\ker(A)^\perp$.

Lemma 7. There exists a constant $C_7 > 0$ such that for all $w \in \ker(A)^\perp$, $w^\top A w \le -C_7\|w\|^2$. Furthermore, for any $w \in \mathbb{R}^d$, it holds that $w - \Gamma(w) \in \ker(A)^\perp$.

The proof is in
Section D.3. We then have $\langle w_t - \Gamma(w_t), A(w_t - \Gamma(w_t))\rangle \le -C_7\|w_t - \Gamma(w_t)\|^2$, which satisfies Assumption A4. For Assumption A5, (6) verifies Assumption A5(1). Assumption A5(2) is verified by the following lemma.

Lemma 8. There exists a constant $C_8$ such that for all $w, y$, $\|H(w, y)\| \le C_8(\|Xw\| + 1)$.

The proof is in Section D.4. Assumption A5(3) is verified following a similar procedure to Lemma 6.7 in Bertsekas and Tsitsiklis [1996] (Lemma 18). Invoking Theorem 3 then completes the proof.

5.3 Proof of Theorem 2

Proof. We recall that, in view of Lemma 1, $\ker(X_1)$ creates "duplication" in value estimation. We therefore define the projection matrix $\Pi \in \mathbb{R}^{d\times d}$ that projects a vector onto the orthogonal complement of $\ker(X_1)$, i.e., $\Pi w \doteq \arg\min_{w'\in\ker(X_1)^\perp}\|w - w'\|$. It can be computed that $\Pi = X_1^\dagger X_1$. We now examine the sequence $\{\Pi w_t\}$, with $\{w_t\}$ being the iterates of (Average Reward TD), and consider the combined parameter vector $\tilde{w}_t \doteq \begin{pmatrix} \hat{J}_t \\ \Pi w_t \end{pmatrix} \in \mathbb{R}^{1+d}$. The following lemma characterizes the evolution of $\tilde{w}_t$. Let $Y_t = (S_t, A_t, S_{t+1}, e_t) \in \mathcal{S}\times\mathcal{A}\times\mathcal{S}\times\big\{e \in \mathbb{R}^d \,\big|\, \|e\| \le \frac{\max_s\|x(s)\|}{1-\lambda}\big\}$; then

Lemma 9. $\tilde{w}_{t+1} = \tilde{w}_t + \alpha_t(\tilde{A}(Y_t)\tilde{w}_t + \tilde{b}(Y_t))$, where, with $y = (s, a, s', e)$,
\[
\tilde{A}(y) = \begin{pmatrix} -c_\beta & 0 \\ -\Pi e & \Pi e\,(x(s')^\top - x(s)^\top) \end{pmatrix}, \qquad \tilde{b}(y) = \begin{pmatrix} c_\beta r(s, a) \\ r(s, a)\Pi e \end{pmatrix}.
\]

This view is inspired by Zhang et al. [2021c] and the proof is in Section E.1. We now apply Theorem 3 to $\{\tilde{w}_t\}$. The verification of Assumptions A1 and A2 is identical to that in Section 5.2 and is thus omitted. For Assumption A3, we define
\[
\widetilde{W}_* \doteq \left\{\begin{pmatrix} J_\pi \\ \Pi w \end{pmatrix} \,\middle|\, w \in W_*\right\}.
\]
It is apparently nonempty, closed, and convex. For Assumption A4, we define $\tilde{A} \doteq \mathbb{E}_{y\sim d_{\mathcal{Y}}}[\tilde{A}(y)]$ and $\tilde{b} \doteq \mathbb{E}_{y\sim d_{\mathcal{Y}}}[\tilde{b}(y)]$ and therefore realize the $h$ in (SA) as $h(\tilde{w}) = \tilde{A}\tilde{w} + \tilde{b}$. Noticing that $\tilde{A}\Gamma(\tilde{w}) + \tilde{b} = 0$ (Lemma 19), we then have $h(\tilde{w}) = \tilde{A}(\tilde{w} - \Gamma(\tilde{w}))$. The term $\langle\nabla L(\tilde{w}), h(\tilde{w})\rangle$ can thus be written as $(\tilde{w} - \Gamma(\tilde{w}))^\top\tilde{A}(\tilde{w} - \Gamma(\tilde{w}))$. Next, we prove that when $c_\beta$ is large enough, $\tilde{A}$ is n.d. on $\mathbb{R}\times\ker(X_1)^\perp$.

Lemma 10. Let $c_\beta$ be sufficiently large. Then there exists a constant $C_{10} > 0$ such that for all $z \in \mathbb{R}\times\ker(X_1)^\perp$, $z^\top\tilde{A}z \le -C_{10}\|z\|^2$.

The proof is in Section E.3.
By definition, we have $\tilde{w}_t \in \mathbb{R}\times\ker(X_1)^\perp$ and $\Gamma(\tilde{w}) \in \mathbb{R}\times\ker(X_1)^\perp$. So $\tilde{w} - \Gamma(\tilde{w}) \in \mathbb{R}\times\ker(X_1)^\perp$, yielding $\langle\tilde{w}_t - \Gamma(\tilde{w}_t), \tilde{A}(\tilde{w}_t - \Gamma(\tilde{w}_t))\rangle \le -C_{10}\|\tilde{w}_t - \Gamma(\tilde{w}_t)\|^2$, which verifies Assumption A4. For Assumption A5, we define
\[
\tilde{X} = \begin{pmatrix} 1 & 0^\top \\ 0 & X \end{pmatrix}.
\]
Assumption A5(1) is verified below.

Lemma 11. There exists a positive constant $C_{11}$ such that for any $\tilde{w} \in \widetilde{W}_*$, $\|\tilde{X}\tilde{w}\| = C_{11}$.

The proof is in Section E.4. With $H(\tilde{w}, y) = \tilde{A}(y)\tilde{w} + \tilde{b}(y)$, the verification of Assumptions A5(2) and (3) is similar to Lemmas 8 and 18 and is thus omitted. Invoking Theorem 3 then yields the convergence rate of $\mathbb{E}[L(\tilde{w}_t)]$, i.e., the convergence rate of $d(\tilde{w}_t, \widetilde{W}_*)^2$ by the definition of $L$. The next key observation is that $d(\tilde{w}_t, \widetilde{W}_*)^2 = (\hat{J}_t - J_\pi)^2 + d(w_t, W_*)^2$ (Lemma 20), which completes the proof.

6 Experiments

We now empirically examine linear TD with linearly dependent features. Following the practice of Sutton and Barto [2018], we use constant learning rates $\alpha$ and $\beta$ instead of $\alpha_t$ and $\beta_t$ to facilitate experiments. We use a variant of Boyan's chain [Boyan, 1999] with 15 states ($|\mathcal{S}| = 15$) and 5 actions ($|\mathcal{A}| = 5$) under a uniform policy $\pi(a|s) = 1/|\mathcal{A}|$, where the feature matrix $X \in \mathbb{R}^{15\times 5}$ is designed to be of rank 3 (more details in Section F). The weight convergence to a set is indeed observed. As expected, different $\lambda$ require different $\alpha$ and $\beta$.
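The flavor of this experiment can be sketched in a few lines. The toy below is our own construction, not the paper's Boyan-chain variant from Section F: it builds a small random MRP, gives it a rank-deficient feature matrix, runs the expected (batch) TD(0) update under the stationary distribution to keep the sketch deterministic, and tracks $d(w_t, W_*)$ instead of the distance to a single fixed point. All sizes and learning rates are arbitrary demo values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma, alpha = 8, 0.5, 0.01

# A small random MRP (our toy, not the paper's Boyan-chain variant).
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)      # row-stochastic transition matrix
r = rng.normal(size=n)                 # expected rewards

# Stationary distribution via power iteration.
d = np.full(n, 1.0 / n)
for _ in range(1000):
    d = d @ P
D = np.diag(d)

# Rank-2 features in R^{8x4}: columns 3 and 4 depend linearly on columns 1 and 2.
Q, _ = np.linalg.qr(rng.normal(size=(n, 2)))
X = np.column_stack([Q[:, 0], Q[:, 1], Q[:, 0] + Q[:, 1], Q[:, 0] - Q[:, 1]])

# Expected TD(0): w <- w + alpha (A w + b) with A = X^T D (gamma P - I) X.
A = X.T @ D @ (gamma * P - np.eye(n)) @ X
b = X.T @ D @ r
w_star = np.linalg.lstsq(A, -b, rcond=None)[0]   # one point of W*
P_perp = np.linalg.pinv(A) @ A                   # projector onto ker(A)^perp

def dist_to_W_star(w):
    # W* = {w*} + ker(A), so the distance ignores the kernel direction.
    return np.linalg.norm(P_perp @ (w - w_star))

w0 = rng.normal(size=4)
w = w0.copy()
for _ in range(100000):
    w = w + alpha * (A @ w + b)

assert dist_to_W_star(w) < 1e-6 * (1 + dist_to_W_star(w0))
# Convergence is to a set, not a point: the ker(X) component of w never moves.
assert np.allclose((np.eye(4) - P_perp) @ w, (np.eye(4) - P_perp) @ w0)
```

Because every update $A w + b$ lies in $\operatorname{row}(X) = \ker(A)^\perp$, the kernel component of $w_t$ stays exactly at its initial value, which is the "convergence to a set" behavior the figures illustrate.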
[Figure 1: three panels ($\lambda = 0.1, 0.5, 0.9$) plotting $d(w_t, W_*)$ against the number of steps ($\times 10^6$).]

Figure 1: Convergence of (Discounted TD) with $\gamma = 0.9$, $\alpha \in \{0.005, 0.01\}$. Curves are averaged over 10 runs with shaded regions (too small to be visible) indicating standard errors.

[Figure 2: three panels ($\lambda = 0.1, 0.5, 0.9$, each with $\beta = 0.01$) plotting $d(w_t, W_*)$ against the number of steps ($\times 10^6$).]

Figure 2: Convergence of (Average Reward TD) with $\beta = 0.01$, $\alpha \in \{0.01, 0.02, 0.1\}$. Curves are averaged over 10 runs with shaded regions (too small to be visible) indicating standard errors.

7 Conclusion

This paper provides the first finite sample analysis of linear TD with arbitrary features in both the discounted and average reward settings, fulfilling the long-standing desiderata of Peter [1992], Tsitsiklis and Roy [1996, 1999], enabled by a novel stochastic approximation result concerning the convergence rate to a set. The key methodological contributions include a novel Lyapunov function based on the distance to a set and a novel decomposition of the feature matrix for the average-reward setting. We envision that the techniques developed in this work can transfer readily to the analyses of other linear RL algorithms. That being said, one limitation of the work is its focus on linear function approximation. Extension to neural networks with neural tangent kernels (cf. Cai et al. [2019]) is possible future work. Another limitation is that this work considers only $L^2$ convergence rates, while the convergence modes of random variables are versatile. Establishing almost sure convergence rates, $L^p$ convergence rates, and high probability concentration bounds (cf. Qian et al. [2024]) is also possible future work.

Acknowledgments and Disclosure of Funding

This work is supported in part by the US National Science Foundation under grants III-2128019 and SLES-2331904.
References

David Abel, André Barreto, Benjamin Van Roy, Doina Precup, Hado van Hasselt, and Satinder Singh. A definition of continual reinforcement learning. ArXiv Preprint, 2023.

Amir Beck. First-Order Methods in Optimization. SIAM, 2017.

Richard Bellman. A Markovian decision process. Journal of Mathematics and Mechanics, 1957.

Richard Bellman. Dynamic programming. Science, 1966.

Albert Benveniste, Michel Métivier, and Pierre Priouret. Adaptive Algorithms and Stochastic Approximations. Springer, 1990.

Dimitri P Bertsekas and John N Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, Belmont, MA, 1996.

Jalaj Bhandari, Daniel Russo, and Raghav Singal. A finite time analysis of temporal difference learning with linear function approximation. In Proceedings of the Conference on Learning Theory, 2018.

Ethan Blaser and Shangtong Zhang. Asymptotic and finite sample analysis of nonexpansive stochastic approximations with Markovian noise. ArXiv Preprint, 2025.

Liu Bo, Liu Ji, Ghavamzadeh Mohammad, Mahadevan Sridhar, and Petrik Marek. Finite-sample analysis of proximal gradient TD algorithms. ArXiv Preprint, 2020.

Vivek Borkar, Shuhang Chen, Adithya Devraj, Ioannis Kontoyiannis, and Sean Meyn. The ODE method for asymptotic statistics in stochastic approximation and reinforcement learning. ArXiv Preprint, 2021.

Vivek S Borkar. Stochastic Approximation: A Dynamical Systems Viewpoint. Springer, 2009.

Vivek S Borkar and Sean P Meyn. The ODE method for convergence of stochastic approximation and reinforcement learning. SIAM Journal on Control and Optimization, 2000.

Justin A. Boyan. Least-squares temporal difference learning. In Proceedings of the International Conference on Machine Learning, 1999.

Mario Bravo and Roberto Cominetti. Stochastic fixed-point iterations for nonexpansive maps: Convergence and error
bounds. ArXiv Preprint, 2022.

Qi Cai, Zhuoran Yang, Jason D Lee, and Zhaoran Wang. Neural temporal-difference and Q-learning provably converge to global optima. ArXiv Preprint, 2019.

Xuyang Chen, Jingliang Duan, Yingbin Liang, and Lin Zhao. Global convergence of two-timescale actor-critic for solving linear quadratic regulator. Proceedings of the AAAI Conference on Artificial Intelligence, 2023a.

Zaiwei Chen. Non-asymptotic guarantees for average-reward Q-learning with adaptive stepsizes. ArXiv Preprint, 2025.

Zaiwei Chen, Siva Theja Maguluri, Sanjay Shakkottai, and Karthikeyan Shanmugam. Finite-sample analysis of contractive stochastic approximation using smooth convex envelopes. ArXiv Preprint, 2020.

Zaiwei Chen, Siva Theja Maguluri, Sanjay Shakkottai, and Karthikeyan Shanmugam. Finite-sample analysis of off-policy TD-learning via generalized Bellman operators. ArXiv Preprint, 2021a.

Zaiwei Chen, Siva Theja Maguluri, Sanjay Shakkottai, and Karthikeyan Shanmugam. A Lyapunov theory for finite-sample guarantees of asynchronous Q-learning and TD-learning variants. ArXiv Preprint, 2021b.

Zaiwei Chen, Siva Theja Maguluri, and Martin Zubeldia. Concentration of contractive stochastic approximation: Additive and multiplicative noise. ArXiv Preprint, 2023b.

Zaiwei Chen, Sheng Zhang, Zhe Zhang, Shaan Ul Haque, and Siva Theja Maguluri. A non-asymptotic theory of seminorm Lyapunov stability: From deterministic to stochastic iterative algorithms. ArXiv Preprint, 2025.

Leah M Hackman. Faster gradient-TD algorithms, 2013.

Khimya Khetarpal, Matthew Riemer, Irina Rish, and Doina Precup. Towards continual reinforcement learning: A review and perspectives. Journal of Artificial Intelligence Research, 2022.

Harold Kushner and G George Yin. Stochastic Approximation and Recursive Algorithms and Applications. Springer Science & Business Media, 2003.

Chandrashekar Lakshminarayanan and Csaba Szepesvári.
Linear stochastic approximation: How far does constant step-size and iterate averaging go? In Proceedings of the International Conference on Artificial Intelligence and Statistics, 2018.

Shuze Liu, Shuhang Chen, and Shangtong Zhang. The ODE method for stochastic approximation and reinforcement learning with Markovian noise. Journal of Machine Learning Research, 2025a.

Xinyu Liu, Zixuan Xie, and Shangtong Zhang. Linear Q-learning does not diverge in $L^2$: Convergence rates to a bounded set. In Proceedings of the International Conference on Machine Learning, 2025b.

Yang Long, Zheng Gang, Zhang Yu, Zheng Qian, Li Pengfei, and Pan Gang. On convergence of gradient expected sarsa($\lambda$). In Proceedings of the AAAI Conference on Artificial Intelligence, 2021.

Hamid Reza Maei. Gradient temporal-difference learning algorithms. PhD thesis, University of Alberta, 2011.

Aritra Mitra. A simple finite-time analysis of TD learning with linear function approximation. IEEE Transactions on Automatic Control, 2025.

Dal Fabbro Nicolò, Adibi Arman, Mitra Aritra, and J. Pappas George. Finite-time analysis of asynchronous multi-agent TD learning. ArXiv Preprint, 2024.

Dayan Peter. The convergence of TD($\lambda$) for general $\lambda$. Machine Learning, 1992.

Martin L Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, 2014.

Xiaochi Qian and Shangtong Zhang. Revisiting a design choice in gradient temporal difference learning. In Proceedings of the International Conference on Learning Representations, 2025.

Xiaochi Qian, Zixuan Xie, Xinyu Liu, and Shangtong Zhang. Almost sure convergence rates and concentration of stochastic approximation and reinforcement learning with Markovian noise. ArXiv Preprint, 2024.

Shuang Qiu, Zhuoran Yang, Jieping Ye, and Zhaoran Wang. On finite-time convergence of actor-critic algorithm. IEEE Journal on Selected Areas in Information Theory, 2021.

Mark Bishop Ring. Continual Learning in Reinforcement Environments.
PhD thesis, The University of Texas at Austin, 1994.

Eugene Seneta. Non-negative Matrices and Markov Chains. Springer Science
& Business Media, 2006.

Ul Haque Shaan and Theja Maguluri Siva. Stochastic approximation with unbounded Markovian noise: A general-purpose theorem. ArXiv Preprint, 2024.

Maity Sreejeet and Mitra Aritra. Adversarially-robust TD learning with Markovian data: Finite-time rates and fundamental limits. ArXiv Preprint, 2025.

Rayadurgam Srikant and Lei Ying. Finite-time error bounds for linear stochastic approximation and TD learning. In Proceedings of the Conference on Learning Theory, 2019.

Richard S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 1988.

Richard S Sutton and Andrew G Barto. Reinforcement Learning: An Introduction (2nd Edition). MIT Press, 2018.

Richard S. Sutton, Csaba Szepesvári, and Hamid Reza Maei. A convergent O(n) temporal-difference algorithm for off-policy learning with linear function approximation. In Advances in Neural Information Processing Systems, 2008.

Richard S. Sutton, Hamid Reza Maei, Doina Precup, Shalabh Bhatnagar, David Silver, Csaba Szepesvári, and Eric Wiewiora. Fast gradient-descent methods for temporal-difference learning with linear function approximation. In Proceedings of the International Conference on Machine Learning, 2009.

Ganesh Swetha, Uddin Mondal Washim, and Aggarwal Vaneet. Order-optimal global convergence for average reward reinforcement learning via actor-critic approach. ArXiv Preprint, 2024.

John N. Tsitsiklis and Benjamin Van Roy. An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control, 1996.

John N. Tsitsiklis and Benjamin Van Roy. Average cost temporal-difference learning. Automatica, 1999.

Jiuqi Wang and Shangtong Zhang. Almost sure convergence of linear temporal difference learning with arbitrary features. ArXiv Preprint, 2024.

Yue Wu, Weitong Zhang, Pan Xu, and Quanquan Gu. A finite-time analysis of two time-scale actor-critic methods. In Advances in Neural Information Processing Systems, 2020.
Tengyu Xu, Zhe Wang, and Yingbin Liang. Improving sample complexity bounds for (natural) actor-critic algorithms. In Advances in Neural Information Processing Systems, 2020a.

Tengyu Xu, Zhe Wang, and Yingbin Liang. Non-asymptotic convergence analysis of two time-scale (natural) actor-critic algorithms. ArXiv Preprint, 2020b.

Tengyu Xu, Zhuoran Yang, Zhaoran Wang, and Yingbin Liang. Doubly robust off-policy actor-critic: Convergence and optimality. ArXiv Preprint, 2021.

Peng Yang, Jin Kaicheng, Zhang Liangyu, and Zhang Zhihua. Finite sample analysis of distributional TD learning with linear function approximation. ArXiv Preprint, 2025.

Zhuoran Yang, Yongxin Chen, Mingyi Hong, and Zhaoran Wang. Provably global convergence of actor-critic: A case for linear quadratic regulator with ergodic cost. In Advances in Neural Information Processing Systems, 2019.

Huizhen Yu. Least squares temporal difference methods: An analysis under general conditions. SIAM Journal on Control and Optimization, 2012.

Huizhen Yu. On convergence of emphatic temporal-difference learning. In Proceedings of the Conference on Learning Theory, 2015.

Huizhen Yu. Weak convergence properties of constrained emphatic temporal-difference learning with constant and slowly diminishing stepsize. Journal of Machine Learning Research, 2016.

Wang Yue, Zhou Yi, and Zou Shaofeng. Finite-time error bounds for greedy-GQ. Machine Learning, 2024.

Shangtong Zhang and Shimon Whiteson. Truncated emphatic temporal difference methods for prediction and control. Journal of Machine Learning Research, 2022.

Shangtong Zhang, Bo Liu, and Shimon Whiteson. GradientDICE: Rethinking generalized offline estimation of stationary values. In Proceedings of the International Conference on Machine Learning, 2020a.

Shangtong Zhang, Bo Liu, Hengshuai Yao, and Shimon Whiteson. Provably convergent two-timescale off-policy actor-critic with function approximation. In Proceedings of the International Conference on Machine Learning, 2020b.
Shangtong Zhang, Yi Wan, Richard S. Sutton, and Shimon Whiteson. Average-reward
off-policy policy evaluation with function approximation. In Proceedings of the International Conference on Machine Learning, 2021a.

Shangtong Zhang, Hengshuai Yao, and Shimon Whiteson. Breaking the deadly triad with a target network. In Proceedings of the International Conference on Machine Learning, 2021b.

Shangtong Zhang, Remi Tachet, and Romain Laroche. Global optimality and finite sample analysis of softmax off-policy actor critic under state distribution mismatch. Journal of Machine Learning Research, 2022.

Shangtong Zhang, Remi Tachet Des Combes, and Romain Laroche. On the convergence of sarsa with linear function approximation. In Proceedings of the International Conference on Machine Learning, 2023.

Sheng Zhang, Zhe Zhang, and Siva Theja Maguluri. Finite sample analysis of average-reward TD learning and Q-learning. In Advances in Neural Information Processing Systems, 2021c.

Shaofeng Zou, Tengyu Xu, and Yingbin Liang. Finite-sample analysis for SARSA with linear function approximation. In Advances in Neural Information Processing Systems, 2019.

A Auxiliary Lemmas and Notations

Lemma 12 (Discrete Gronwall Inequality, Lemma 8 in Section 11.2 of Borkar [2009]). For non-negative real sequences $\{x_n, n \ge 0\}$ and $\{a_n, n \ge 0\}$ and a scalar $L \ge 0$, it holds that
\[
x_{n+1} \le C + L\sum_{i=0}^{n} a_i x_i \;\;\forall n \implies x_{n+1} \le (C + x_0)\exp\Big(L\sum_{i=0}^{n} a_i\Big) \;\;\forall n.
\]

Lemma 13 (Lemma 11 of Zhang et al. [2022]). For sufficiently large $t_0$, it holds that
\[
\tau_{\alpha_t} = O(\log(t+t_0)), \qquad \alpha_{t-\tau_{\alpha_t}, t-1} = O\left(\frac{\log(t+t_0)}{(t+t_0)^\xi}\right).
\]

Lemma 13 ensures that there exists some $\bar{t} > 0$ (depending on $t_0$) such that for all $t \ge \bar{t}$, it holds that $t \ge \tau_{\alpha_t}$. Also, it ensures that for sufficiently large $t_0$, we have $\alpha_{t-\tau_{\alpha_t}, t-1} < 1$. Throughout the appendix, we always assume $t_0$ is sufficiently large and $t \ge \bar{t}$. We will refine (i.e., increase) $\bar{t}$ along the proof when necessary.

B Proofs in Section 3

B.1 Proof of Lemma 1

Proof. Let $x_i$ denote the $i$-th column of $X$. Without loss of generality, let the first $m$ columns be linearly independent.
Case 1: When $\mathbf{1} \in \operatorname{col}(X)$, there must exist $m$ scalars $\{c_i\}$ such that $\sum_{i=1}^{m} c_i x_i = \mathbf{1}$. Apparently, at least one of $\{c_i\}$ must be nonzero. Without loss of generality, let $c_m \ne 0$. We then have
\[
x_m = \frac{1}{c_m}\Big(\mathbf{1} - \sum_{i=1}^{m-1} c_i x_i\Big).
\]
In other words, $x_m$ can be expressed as a linear combination of $\{x_1, \ldots, x_{m-1}\}$ and $\mathbf{1}$. Since $X$ has column rank $m$, we can express $\{x_{m+1}, \ldots, x_d\}$ as linear combinations of $\{x_1, \ldots, x_m\}$, and thus further as linear combinations of $\{x_1, \ldots, x_{m-1}\}$ and $\mathbf{1}$. Let $Z_1 \doteq [x_1, \ldots, x_{m-1}]$ be the first $m-1$ columns of $X$ and $Z_2 \doteq [x_m, \ldots, x_d]$ be the rest. We now know that there exists some $C \in \mathbb{R}^{(m-1)\times(d-m+1)}$ (i.e., the coefficients of the linear combinations) such that
\[
Z_2 = Z_1 C + [\theta_m \mathbf{1}, \ldots, \theta_d \mathbf{1}],
\]
where $\theta_m, \ldots, \theta_d$ are scalars (i.e., "coordinates" along the $\mathbf{1}$-axis), e.g., $\theta_m = \frac{1}{c_m}$. This means that we can express $X$ as
\[
X = [Z_1\;\; Z_1 C] + [\theta_1 \mathbf{1}, \ldots, \theta_d \mathbf{1}] \tag{11}
\]
with $\theta_1 = \cdots = \theta_{m-1} = 0$. Now define $X_1 \doteq [Z_1\;\; Z_1 C]$ and $X_2 \doteq [\theta_1 \mathbf{1}, \ldots, \theta_d \mathbf{1}]$. We note that $\mathbf{1} \notin \operatorname{col}(Z_1)$. Otherwise, there would exist scalars $\{c'_i\}$ such that $\sum_{i=1}^{m-1} c'_i x_i = \mathbf{1}$. We would then get $\sum_{i=1}^{m-1}(c_i - c'_i)x_i + c_m x_m = 0$, which is impossible because $\{x_i\}_{i=1,\ldots,m}$ are linearly independent. Since $\operatorname{col}(X_1) = \operatorname{col}(Z_1)$, we then have $\mathbf{1} \notin \operatorname{col}(X_1)$.

Case 2: When $\mathbf{1} \notin \operatorname{col}(X)$, we can trivially define $X_1 = X$ and $X_2 = 0$. Additionally, we can still further decompose $X_1$ as
\[
X_1 = [Z_1\;\; Z_1 C], \tag{12}
\]
where
$Z_1$ is now the first $m$ columns of $X$. Apparently, we still have $\mathbf{1} \notin \operatorname{col}(X_1)$.

Lemma 14. Let Assumption 3.1 hold. Then
\[
A = X_1^\top D_\pi (P_\lambda - I) X_1, \qquad b = X_1^\top D_\pi \Big(r_\lambda - \frac{J_\pi}{1-\lambda}\mathbf{1}\Big).
\]

Proof. Applying the decomposition shown in Lemma 1, we can get
\[
\begin{aligned}
A &= (X_1 + X_2)^\top D_\pi (P_\lambda - I)(X_1 + X_2) \\
&= X_1^\top D_\pi (P_\lambda - I) X_1 + X_2^\top D_\pi (P_\lambda - I) X_1 + X_1^\top D_\pi (P_\lambda - I) X_2 + X_2^\top D_\pi (P_\lambda - I) X_2 \\
&= X_1^\top D_\pi (P_\lambda - I) X_1,
\end{aligned}
\]
where the last equality holds because $(P_\lambda - I)\mathbf{1} = 0$ and $\mathbf{1}^\top D_\pi (P_\lambda - I) = d_\pi^\top P_\lambda - d_\pi^\top = 0$. Similarly, for $b$ we can obtain
\[
\begin{aligned}
b &= (X_1 + X_2)^\top D_\pi \Big(r_\lambda - \frac{J_\pi}{1-\lambda}\mathbf{1}\Big) \\
&= X_1^\top D_\pi \Big(r_\lambda - \frac{J_\pi}{1-\lambda}\mathbf{1}\Big) + X_2^\top D_\pi \Big(r_\lambda - \frac{J_\pi}{1-\lambda}\mathbf{1}\Big) \\
&= X_1^\top D_\pi \Big(r_\lambda - \frac{J_\pi}{1-\lambda}\mathbf{1}\Big) + \theta\Big(d_\pi^\top (I - \lambda P_\pi)^{-1} r_\pi - \frac{J_\pi}{1-\lambda}\Big) \\
&= X_1^\top D_\pi \Big(r_\lambda - \frac{J_\pi}{1-\lambda}\mathbf{1}\Big) + \theta\Big(\frac{1}{1-\lambda} d_\pi^\top r_\pi - \frac{J_\pi}{1-\lambda}\Big) \\
&= X_1^\top D_\pi \Big(r_\lambda - \frac{J_\pi}{1-\lambda}\mathbf{1}\Big).
\end{aligned}
\]
Here, the fourth equality holds because $d_\pi^\top (I - \lambda P_\pi) = (1-\lambda) d_\pi^\top$, which gives us $d_\pi^\top = (1-\lambda) d_\pi^\top (I - \lambda P_\pi)^{-1}$. The last equality holds since $J_\pi = d_\pi^\top r_\pi$. This completes the proof.

Lemma 15. Let Assumption 3.1 hold. Then $W_*$ is nonempty.

Proof. In view of (11) and (12), we have $X_1 = [Z_1\;\; Z_1 C]$. Notably, $Z_1$ has full column rank and $\mathbf{1} \notin \operatorname{col}(Z_1)$. Decomposing $w \doteq \begin{pmatrix} w_1 \\ w_2 \end{pmatrix}$ accordingly and recalling (4) and Lemma 14, we can rewrite $Aw + b = 0$ as
\[
\begin{pmatrix} Z_1^\top \\ (Z_1 C)^\top \end{pmatrix} D_\pi (P_\lambda - I)[Z_1\;\; Z_1 C]\begin{pmatrix} w_1 \\ w_2 \end{pmatrix} = \begin{pmatrix} -Z_1^\top D_\pi (r_\lambda - \frac{J_\pi}{1-\lambda}\mathbf{1}) \\ -(Z_1 C)^\top D_\pi (r_\lambda - \frac{J_\pi}{1-\lambda}\mathbf{1}) \end{pmatrix},
\]
which gives us the following simultaneous equations:
\[
\begin{cases}
Z_1^\top D_\pi (P_\lambda - I) Z_1 w_1 + Z_1^\top D_\pi (P_\lambda - I) Z_1 C w_2 = -Z_1^\top D_\pi (r_\lambda - \frac{J_\pi}{1-\lambda}\mathbf{1}) \\
(Z_1 C)^\top D_\pi (P_\lambda - I) Z_1 w_1 + (Z_1 C)^\top D_\pi (P_\lambda - I) Z_1 C w_2 = -(Z_1 C)^\top D_\pi (r_\lambda - \frac{J_\pi}{1-\lambda}\mathbf{1}).
\end{cases}
\]
We now prove the claim by constructing a solution. Choose any $w_2 \in \ker(Z_1 C)$ (e.g., $w_2 = 0$); the equations then become
\[
\begin{cases}
Z_1^\top D_\pi (P_\lambda - I) Z_1 w_1 = -Z_1^\top D_\pi (r_\lambda - \frac{J_\pi}{1-\lambda}\mathbf{1}) \\
C^\top Z_1^\top D_\pi (P_\lambda - I) Z_1 w_1 = -C^\top Z_1^\top D_\pi (r_\lambda - \frac{J_\pi}{1-\lambda}\mathbf{1}).
\end{cases}
\]
Since $Z_1$ has full column rank and $\mathbf{1} \notin \operatorname{col}(Z_1)$, Lemma 7 of Tsitsiklis and Roy [1999] shows that $Z_1^\top D_\pi (P_\lambda - I) Z_1$ is n.d. and thus invertible. Choosing $w_1 = -(Z_1^\top D_\pi (P_\lambda - I) Z_1)^{-1} Z_1^\top D_\pi (r_\lambda - \frac{J_\pi}{1-\lambda}\mathbf{1})$ then satisfies the equations. This completes the proof.

Lemma 16. Let Assumption 3.1 hold. Then $W_* = \{w_*\} + \ker(X_1)$ and $\ker(X_1) = \{w \mid Xw = c\mathbf{1},\, c \in \mathbb{R}\}$.

Proof. For any solutions $w_*, w_{**} \in W_*$, according to the definition of $W_*$ in (4), we have $Aw_* + b = 0$ and $Aw_{**} + b = 0$. That is, $A(w_* - w_{**}) = 0$. By multiplying $(w_* - w_{**})^\top$ on both sides, we can get
\[
(w_* - w_{**})^\top X^\top D_\pi (P_\lambda - I) X (w_* - w_{**}) = 0.
\]
According to the Perron–Frobenius theorem with Assumption 3.1, $v^\top D_\pi (P_\lambda - I) v = 0$ if and only if $v = c\mathbf{1}$ for some $c \in \mathbb{R}$. Therefore, we must have $X(w_* - w_{**}) = c\mathbf{1}$ for some $c \in \mathbb{R}$. That is, $(X_1 + X_2)(w_* - w_{**}) = c\mathbf{1}$. Recalling the definition of $X_2$ in (11), we have $X_2(w_* - w_{**}) = (\theta^\top (w_* - w_{**}))\mathbf{1}$. This means $X_1(w_* - w_{**}) = c'\mathbf{1}$ with $c' = c - \theta^\top (w_* - w_{**})$. Since $\mathbf{1} \notin \operatorname{col}(X_1)$, we must have $c' = 0$. That is, $w_* - w_{**} \in \ker(X_1)$. Thus, we have established that $W_* = \{w_*\} + \ker(X_1)$. Furthermore, if $w \in \ker(X_1)$, we have $Xw = (X_1 + X_2)w = (\theta^\top w)\mathbf{1}$. Conversely, if $Xw = c\mathbf{1}$, we have $X_1 w = c\mathbf{1} - X_2 w = (c - \theta^\top w)\mathbf{1}$. But $\mathbf{1} \notin \operatorname{col}(X_1)$, so we must have $c - \theta^\top w = 0$, i.e., $w \in \ker(X_1)$. This completes the proof of $\ker(X_1) = \{w \mid Xw = c\mathbf{1},\, c \in \mathbb{R}\}$.

C Proofs in Section 5.1

Lemma 17. For sufficiently large $t_0$, there exists a constant $C_{17}$ such that the following statement holds. For any $t \ge \bar{t}$ and any $i \in [t - \tau_{\alpha_t}, t]$, it holds that
\[
\|w_i - w_{t-\tau_{\alpha_t}}\| \le C_{17}\,\alpha_{t-\tau_{\alpha_t}, i-1}(\|w_i - \Gamma(w_i)\| + 1).
\]

Proof. In this proof, to simplify notation, we define the shorthands $t_1 \doteq t - \tau_{\alpha_t}$ and $C_x \doteq \max_s\|x(s)\|$. Given Lemma 13, we can select a sufficiently large $t_0$ such that for any $t \ge \bar{t}$,
\[
\exp\big(C_{A5} C_x \alpha_{t-\tau_{\alpha_t}, t-1}\big) < 3, \qquad C_{A5} C_x \alpha_{t-\tau_{\alpha_t}, t-1} < \frac{1}{6}.
\]
We then bound $\|w_i - w_{t_1}\|$ as
\[
\begin{aligned}
\|w_i - w_{t_1}\| &\le \sum_{k=t_1}^{i-1} \|\alpha_k H(w_k, Y_{k+1})\| \\
&\le \sum_{k=t_1}^{i-1} \alpha_k C_{A5}(\|Xw_k - Xw_{t_1}\| + \|Xw_{t_1}\| + 1) && \text{(Assumption A5)} \\
&\le \sum_{k=t_1}^{i-1} \alpha_k C_{A5}(\|Xw_{t_1}\| + 1) + \sum_{k=t_1}^{i-1} \alpha_k C_{17,1}\|w_k - w_{t_1}\| \\
&\le C_{A5}\,\alpha_{t_1, i-1}(\|Xw_{t_1}\| + 1)\exp(C_{17,1}\alpha_{t_1, t-1}), && \text{(Lemma 12)}
\end{aligned}
\]
where $C_{17,1} \doteq C_{A5} C_x$. We then have
\[
\begin{aligned}
\|w_i - w_{t_1}\| &\le C_{A5}\,\alpha_{t_1, i-1}(\|Xw_i - Xw_{t_1}\| + \|Xw_i\| + 1)\exp(C_{17,1}\alpha_{t_1, t-1}) \\
&\le C_{A5} C_x \exp(C_{17,1}\alpha_{t_1, t-1})\,\alpha_{t_1, i-1}\|w_i - w_{t_1}\| + \exp(C_{17,1}\alpha_{t_1, t-1})(\|Xw_i\| + 1)C_{A5}\,\alpha_{t_1, i-1} \\
&\le \frac{1}{2}\|w_i - w_{t_1}\| + C_{17,2}\,\alpha_{t_1, i-1}(\|Xw_i\| + 1),
\end{aligned}
\]
where $C_{17,2} \doteq 3C_{A5}$. Thus, we have
\[
\begin{aligned}
\|w_i - w_{t_1}\| &\le 2C_{17,2}\,\alpha_{t_1, i-1}(\|Xw_i\| + 1) \\
&\le 2C_{17,2}\,\alpha_{t_1, i-1}(C_2(\|w_i - \Gamma(w_i)\| + 1) + 1) \\
&\le C_{17}\,\alpha_{t_1, i-1}(\|w_i - \Gamma(w_i)\| + 1),
\end{aligned}
\]
where $C_{17} \doteq 2C_{17,2}(C_2 + 1)$. This completes the
proof.

C.1 Proof of Lemma 2

Proof.
\[
\|Xw\| = \|Xw - X\Gamma(w) + X\Gamma(w)\| \le \|X(w - \Gamma(w))\| + \|X\Gamma(w)\| \le \|X\|\|w - \Gamma(w)\| + C_{A5}. \quad \text{(Assumption A5)}
\]

C.2 Proof of Lemma 3

Proof. According to the definition of $H(w_t, Y_t)$ in (10),
\[
\begin{aligned}
\|H(w_t, Y_t)\|^2 &\le C_{A5}^2(\|Xw_t\| + 1)^2 && \text{(Assumption A5)} \\
&\le 2C_{A5}^2(\|Xw_t\|^2 + 1) \\
&\le 2C_{A5}^2(C_2^2(\|w_t - \Gamma(w_t)\| + 1)^2 + 1) \\
&\le 2C_{A5}^2(2C_2^2(\|w_t - \Gamma(w_t)\|^2 + 1) + 1) \\
&\le C_3(\|w_t - \Gamma(w_t)\|^2 + 1),
\end{aligned}
\]
where $C_3 \doteq 2C_{A5}^2(2C_2^2 + 1)$. This completes the proof.

C.3 Proof of Lemma 4

Proof. We first decompose $\langle w_t - \Gamma(w_t), H(w_t, Y_t) - h(w_t)\rangle$ into three components, similarly to Srikant and Ying [2019], as
\[
\begin{aligned}
&\langle w_t - \Gamma(w_t), H(w_t, Y_t) - h(w_t)\rangle \\
&= \underbrace{\langle (w_t - \Gamma(w_t)) - (w_{t-\tau_{\alpha_t}} - \Gamma(w_{t-\tau_{\alpha_t}})), H(w_t, Y_t) - h(w_t)\rangle}_{T_1} \\
&\quad + \underbrace{\langle w_{t-\tau_{\alpha_t}} - \Gamma(w_{t-\tau_{\alpha_t}}), H(w_t, Y_t) - H(w_{t-\tau_{\alpha_t}}, Y_t) + h(w_{t-\tau_{\alpha_t}}) - h(w_t)\rangle}_{T_2} \\
&\quad + \underbrace{\langle w_{t-\tau_{\alpha_t}} - \Gamma(w_{t-\tau_{\alpha_t}}), H(w_{t-\tau_{\alpha_t}}, Y_t) - h(w_{t-\tau_{\alpha_t}})\rangle}_{T_3}.
\end{aligned}
\]
We leverage Lemma 2 and (9) to bound them one by one as follows.

Bounding $T_1$:
\[
T_1 \le \underbrace{\|(w_t - \Gamma(w_t)) - (w_{t-\tau_{\alpha_t}} - \Gamma(w_{t-\tau_{\alpha_t}}))\|}_{T_{11}} \cdot \underbrace{\|H(w_t, Y_t) - h(w_t)\|}_{T_{12}}.
\]
For the first term, we have
\[
\begin{aligned}
T_{11} &\le \|w_t - w_{t-\tau_{\alpha_t}}\| + \|\Gamma(w_t) - \Gamma(w_{t-\tau_{\alpha_t}})\| \\
&\le 2\|w_t - w_{t-\tau_{\alpha_t}}\| && \text{(since $W_*$ is convex, $\Gamma$ is nonexpansive)} \\
&\le 2C_{17}\,\alpha_{t-\tau_{\alpha_t}, t-1}(\|w_t - \Gamma(w_t)\| + 1). && \text{(Lemma 17)}
\end{aligned}
\]
For the second term, we have
\[
T_{12} \le C_{A5}(\|Xw_t\| + 1) + C_{A5}(\|Xw_t\| + 1) \le 2C_{A5}(C_2(\|w_t - \Gamma(w_t)\| + 1) + 1) \le C_{4,1}(\|w_t - \Gamma(w_t)\| + 1),
\]
where $C_{4,1} \doteq 2C_{A5}(C_2 + 1)$. Therefore, we can get
\[
T_1 \le 2C_{17}C_{4,1}\,\alpha_{t-\tau_{\alpha_t}, t-1}(\|w_t - \Gamma(w_t)\| + 1)^2.
\]
Choosing $C_{4,a} \doteq 4C_{17}C_{4,1}$ then yields the bound $T_1 \le C_{4,a}\,\alpha_{t-\tau_{\alpha_t}, t-1}(\|w_t - \Gamma(w_t)\|^2 + 1)$.

Bounding $T_2$:
\[
T_2 \le \underbrace{\|w_{t-\tau_{\alpha_t}} - \Gamma(w_{t-\tau_{\alpha_t}})\|}_{T_{21}} \cdot \underbrace{\|H(w_t, Y_t) - H(w_{t-\tau_{\alpha_t}}, Y_t) + h(w_{t-\tau_{\alpha_t}}) - h(w_t)\|}_{T_{22}}.
\]
For the first term, we have
\[
\begin{aligned}
T_{21} &\le \|w_{t-\tau_{\alpha_t}} - \Gamma(w_t)\| + \|\Gamma(w_t) - \Gamma(w_{t-\tau_{\alpha_t}})\| \\
&\le \|w_{t-\tau_{\alpha_t}} - \Gamma(w_t)\| + \|w_t - w_{t-\tau_{\alpha_t}}\| \\
&\le \|w_t - \Gamma(w_t)\| + \|w_{t-\tau_{\alpha_t}} - w_t\| + \|w_t - w_{t-\tau_{\alpha_t}}\| \\
&= \|w_t - \Gamma(w_t)\| + 2\|w_t - w_{t-\tau_{\alpha_t}}\| \\
&\le \|w_t - \Gamma(w_t)\| + 2C_{17}\,\alpha_{t-\tau_{\alpha_t}, t-1}(\|w_t - \Gamma(w_t)\| + 1) && \text{(Lemma 17)} \\
&\le C_{4,2}(\|w_t - \Gamma(w_t)\| + 1). && \text{(Lemma 13)}
\end{aligned} \tag{13}
\]
For the second term, we have
\[
\begin{aligned}
T_{22} &\le \|H(w_t, Y_t) - H(w_{t-\tau_{\alpha_t}}, Y_t)\| + \|h(w_t) - h(w_{t-\tau_{\alpha_t}})\| \\
&\le 2C_{A1}\|w_{t-\tau_{\alpha_t}} - w_t\| \\
&\le C_{4,3}C_{17}\,\alpha_{t-\tau_{\alpha_t}, t-1}(\|w_t - \Gamma(w_t)\| + 1). && \text{(Lemma 17)}
\end{aligned} \tag{14}
\]
Combining (13) and (14), we have
\[
T_2 \le C_{4,2}C_{4,3}C_{17}\,\alpha_{t-\tau_{\alpha_t}, t-1}(\|w_t - \Gamma(w_t)\| + 1)^2.
\]
Choosing $C_{4,b} \doteq 2C_{4,2}C_{4,3}C_{17}$ then yields the bound $T_2 \le C_{4,b}\,\alpha_{t-\tau_{\alpha_t}, t-1}(\|w_t - \Gamma(w_t)\|^2 + 1)$.

Bounding $T_3$: Taking expectations on both sides of $T_3 = \langle w_{t-\tau_{\alpha_t}} - \Gamma(w_{t-\tau_{\alpha_t}}), H(w_{t-\tau_{\alpha_t}}, Y_t) - h(w_{t-\tau_{\alpha_t}})\rangle$, we can get
\[
\begin{aligned}
\mathbb{E}[T_3] &= \mathbb{E}\big[\mathbb{E}\big[\langle w_{t-\tau_{\alpha_t}} - \Gamma(w_{t-\tau_{\alpha_t}}), H(w_{t-\tau_{\alpha_t}}, Y_t) - h(w_{t-\tau_{\alpha_t}})\rangle \mid w_{t-\tau_{\alpha_t}}, Y_{t-\tau_{\alpha_t}}\big]\big] \\
&= \mathbb{E}\big[\langle w_{t-\tau_{\alpha_t}} - \Gamma(w_{t-\tau_{\alpha_t}}), \mathbb{E}\big[H(w_{t-\tau_{\alpha_t}}, Y_t) - h(w_{t-\tau_{\alpha_t}}) \mid w_{t-\tau_{\alpha_t}}, Y_{t-\tau_{\alpha_t}}\big]\rangle\big] \\
&\le \mathbb{E}\Big[\underbrace{\|w_{t-\tau_{\alpha_t}} - \Gamma(w_{t-\tau_{\alpha_t}})\|}_{T_{31}} \cdot \underbrace{\big\|\mathbb{E}\big[H(w_{t-\tau_{\alpha_t}}, Y_t) - h(w_{t-\tau_{\alpha_t}}) \mid w_{t-\tau_{\alpha_t}}, Y_{t-\tau_{\alpha_t}}\big]\big\|}_{T_{32}}\Big].
\end{aligned}
\]
We have
\[
\begin{aligned}
T_{32} &\le \alpha_t(\|Xw_{t-\tau_{\alpha_t}}\| + 1) && \text{(by (7) and (9))} \\
&\le \alpha_t(\|Xw_{t-\tau_{\alpha_t}} - Xw_t\| + \|Xw_t\| + 1) \\
&\le \alpha_t(\|Xw_{t-\tau_{\alpha_t}} - Xw_t\| + C_2(\|w_t - \Gamma(w_t)\| + 1) + 1) \\
&\le \alpha_t(C_{17}C_x\,\alpha_{t-\tau_{\alpha_t}, t-1}(\|w_t - \Gamma(w_t)\| + 1) + C_2\|w_t - \Gamma(w_t)\| + C_2 + 1) \\
&\le C_{4,4}\,\alpha_t(\|w_t - \Gamma(w_t)\| + 1).
\end{aligned}
\]
Thus, together with (13), we obtain $\mathbb{E}[T_3] \le C_{4,c}\,\alpha_t(\|w_t - \Gamma(w_t)\|^2 + 1)$, where $C_{4,c} \doteq C_{4,2}C_{4,4}$. Finally, denoting $C_4 \doteq C_{4,a} + C_{4,b} + C_{4,c}$ completes the proof.

C.4 Proof of Lemma 5

Proof. We recall that $\|w_t - \Gamma(w_t)\|^2 = 2L(w_t)$. Combining Assumption A4 and Lemmas 3 and 4 with (8), we get
\[
\begin{aligned}
\mathbb{E}[L(w_{t+1})] &\le (1 - 2C_{A4}\alpha_t)\mathbb{E}[L(w_t)] + C_4\alpha_t\alpha_{t-\tau_{\alpha_t}, t-1}(2\mathbb{E}[L(w_t)] + 1) + \frac{C_3}{2}\alpha_t^2 + C_3\alpha_t^2\,\mathbb{E}[L(w_t)] \\
&\le \big(1 - 2C_{A4}\alpha_t + 2C_4\alpha_t\alpha_{t-\tau_{\alpha_t}, t-1} + C_3\alpha_t^2\big)\mathbb{E}[L(w_t)] + C_4\alpha_t\alpha_{t-\tau_{\alpha_t}, t-1} + \frac{C_3}{2}\alpha_t^2.
\end{aligned}
\]
Furthermore, we aim to derive an upper bound for $\mathbb{E}[L(w_t)]$ that depends on the initial expected loss $\mathbb{E}[L(w_0)]$ and decreases over time. First, denote the coefficients as $C_t$ and $D_t$:
\[
C_t \doteq 1 - 2C_{A4}\alpha_t + 2C_4\alpha_t\alpha_{t-\tau_{\alpha_t}, t-1} + C_3\alpha_t^2, \qquad D_t \doteq C_4\alpha_t\alpha_{t-\tau_{\alpha_t}, t-1} + \frac{C_3}{2}\alpha_t^2.
\]
For sufficiently large $t_0$ and $t \ge \bar{t}$, we obtain $4C_4\alpha_{t-\tau_{\alpha_t}, t-1} + C_3\alpha_t < C_{A4}$.
Thus, the recursive inequality further becomes $\mathbb{E}[L(w_{t+1})] \le (1 - C_{A4}\alpha_t)\mathbb{E}[L(w_t)] + D_t$, where $D_t = O(\alpha_t\alpha_{t-\tau_{\alpha_t}, t-1})$.

C.5 Proof of Theorem 3

Proof. To express $\mathbb{E}[L(w_t)]$ in terms of $\mathbb{E}[L(w_{\bar{t}})]$, we recursively apply the inequality of Lemma 5:
\[
\mathbb{E}[L(w_t)] \le \prod_{i=\bar{t}}^{t}(1 - C_{A4}\alpha_i)\,\mathbb{E}[L(w_{\bar{t}})] + \sum_{j=\bar{t}}^{t}\Big(\prod_{i=j+1}^{t}(1 - C_{A4}\alpha_i)\Big)D_j.
\]
Denote $E_1 \doteq \prod_{i=\bar{t}}^{t}(1 - C_{A4}\alpha_i)\,\mathbb{E}[L(w_{\bar{t}})]$, $E_2 \doteq \sum_{j=\bar{t}}^{t}\big(\prod_{i=j+1}^{t}(1 - C_{A4}\alpha_i)\big)\frac{\ln(j+t_0)}{(j+t_0)^{2\xi}}$, and $\kappa = C_{A4}\alpha$. Recall that $\alpha_t = \frac{\alpha}{(t+t_0)^\xi}$. For $E_1$, setting $t_0 > \kappa = C_{A4}\alpha$, we have
\[
\begin{aligned}
\prod_{i=\bar{t}}^{t}(1 - C_{A4}\alpha_i)\,\mathbb{E}[L(w_{\bar{t}})] &= \prod_{i=\bar{t}}^{t}\Big(1 - \frac{C_{A4}\alpha}{(i+t_0)^\xi}\Big)\mathbb{E}[L(w_{\bar{t}})] \\
&\le \prod_{i=\bar{t}}^{t}\Big(1 - \frac{\kappa}{i+t_0}\Big)\mathbb{E}[L(w_{\bar{t}})] \\
&= \mathbb{E}[L(w_{\bar{t}})]\prod_{i=\bar{t}}^{t}\frac{i+t_0-\kappa}{i+t_0} \\
&\le \mathbb{E}[L(w_{\bar{t}})]\Big(\frac{\bar{t}+t_0}{t+t_0-\kappa}\Big)^{\lfloor\kappa\rfloor}.
\end{aligned}
\]
For $E_2$, we have (here $C'$, $C''$, and $C'''$ denote generic constants that we do not track)
\[
\begin{aligned}
E_2 &= \sum_{j=\bar{t}}^{t}\Big(\prod_{i=j+1}^{t}\frac{i+t_0-\kappa}{i+t_0}\Big)\frac{\ln(j+t_0)}{(j+t_0)^{2\xi}} \\
&= \sum_{j=\bar{t}}^{t-\lfloor\kappa\rfloor}\Big(\prod_{i=j+1}^{t}\frac{i+t_0-\kappa}{i+t_0}\Big)\frac{\ln(j+t_0)}{(j+t_0)^{2\xi}} + \sum_{j=t-\lfloor\kappa\rfloor+1}^{t}\Big(\prod_{i=j+1}^{t}\frac{i+t_0-\kappa}{i+t_0}\Big)\frac{\ln(j+t_0)}{(j+t_0)^{2\xi}} \\
&\le \sum_{j=\bar{t}}^{t-\lfloor\kappa\rfloor}\Big(\frac{j+1+t_0}{t+t_0-\kappa}\Big)^{\lfloor\kappa\rfloor}\frac{\ln(j+t_0)}{(j+t_0)^{2\xi}} + \frac{\lfloor\kappa\rfloor\ln(t+t_0)}{(t-\lfloor\kappa\rfloor+1+t_0)^{2\xi}} \\
&\le \frac{\ln(t+t_0)}{(t+t_0-\kappa)^{\lfloor\kappa\rfloor}}\,C'\sum_{j=\bar{t}}^{t-\lfloor\kappa\rfloor}(j+t_0)^{\lfloor\kappa\rfloor-2\xi} + \frac{\lfloor\kappa\rfloor\ln(t+t_0)}{(t-\kappa+1+t_0)^{2\xi}}.
\end{aligned}
\]
Case 1: $\lfloor\kappa\rfloor - 2\xi > 0$. Then
\[
\begin{aligned}
E_2 &\le \frac{\ln(t+t_0)}{(t+t_0-\kappa)^{\lfloor\kappa\rfloor}}\,C''(t-\lfloor\kappa\rfloor+t_0)^{\lfloor\kappa\rfloor-2\xi+1} + \frac{\lfloor\kappa\rfloor\ln(t+t_0)}{(t-\kappa+1+t_0)^{2\xi}} \\
&\le \frac{\ln(t+t_0)}{(t+t_0-\kappa)^{2\xi-1}}\,C_{\mathrm{Thm3},3} + \frac{\lfloor\kappa\rfloor\ln(t+t_0)}{(t-\kappa+1+t_0)^{2\xi}} \\
&\le C_{\mathrm{Thm3},4}\,\frac{\ln(t+t_0)}{(t+t_0)^{2\xi-1}}.
\end{aligned}
\]
Case 2: $\lfloor\kappa\rfloor - 2\xi \le 0$. Then
\[
\begin{aligned}
E_2 &\le \frac{\ln(t+t_0)}{(t+t_0-\kappa)^{\lfloor\kappa\rfloor}}\,C'''(t-\lfloor\kappa\rfloor+1) + \frac{\lfloor\kappa\rfloor\ln(t+t_0)}{(t-\kappa+1+t_0)^{2\xi}} \\
&\le \frac{\ln(t+t_0)}{(t+t_0-\kappa)^{\lfloor\kappa\rfloor-1}}\,C_{\mathrm{Thm3},5} + \frac{\lfloor\kappa\rfloor\ln(t+t_0)}{(t-\kappa+1+t_0)^{2\xi}} \\
&\le C_{\mathrm{Thm3},6}\,\frac{\ln(t+t_0)}{(t+t_0)^{\lfloor\kappa\rfloor-1}}.
\end{aligned}
\]
Starting from the update of $w_{t+1}$, we have $\|w_{t+1}\| \le \|w_t\| + \alpha_t\|H(w_t, Y_{t+1})\| \le \|w_t\| + \alpha_t C_{A1}(\|w_t\| + 1)$. That is, $\|w_{t+1}\| \le \alpha_0 C_{A1} + \sum_{i=0}^{t}(\alpha_0 C_{A1} + 1)\|w_i\|$. Applying the discrete Gronwall inequality (Lemma 12), we obtain, for $t \le \bar{t}$,
\[
\|w_t\| \le (C_{A1} + \|w_0\|)\exp\Big(\sum_{i=0}^{\bar{t}-1}(1 + \alpha_0 C_{A1})\Big) = (C_{A1} + \|w_0\|)\exp(\bar{t} + \bar{t}\alpha_0 C_{A1}).
\]
Denoting $C_{\mathrm{Thm3},1} \doteq \exp(2\bar{t} + 2\bar{t}\alpha_0 C_{A1})$ and $C_{\mathrm{Thm3},2} \doteq 2\max(C_{\mathrm{Thm3},4}, C_{\mathrm{Thm3},6})$ then completes the proof.

D Proofs in Section 5.2

D.1 Proof of Lemma 6

Proof. Let $y = (s, a, s', e) \in \mathcal{Y}$ and $C_x \doteq \max_s\|x(s)\|$. We have
\[
\|H(w, y) - H(w', y)\| = \|e(\gamma x(s')^\top - x(s)^\top)(w - w')\| \le 2C_x C_e\|w - w'\|.
\]
Furthermore,
\[
\sup_{y\in\mathcal{Y}}\|H(0, y)\| = \sup_{y\in\mathcal{Y}}\|r(s, a)e\| \le \max_{s,a}|r(s, a)|\,C_e,
\]
which completes the proof.

D.2 Proof of Lemma 18

Lemma 18. There exist a constant $C_{18}$ and $\tau \in [0, 1)$ such that for all $w$,
\[
\|\mathbb{E}[H(w, Y_{t+n}) \mid Y_t] - h(w)\| \le C_{18}\tau^n(\|Xw\| + 1).
\]

Proof. Given the Markov property, we only need to prove the case of $t = 1$. Recall that we use $y = (s, a, s', e)$. Define the shorthands
\[
\delta((s, a, s'), w) \doteq r(s, a) + \gamma x(s')^\top w - x(s)^\top w, \qquad \delta_{n+1}(w) \doteq \delta((S_n, A_n, S_{n+1}), w).
\]
By (10), we can get $H(w, Y_{n+1}) = \delta_{n+1}(w)e_n$. By expanding $e_n$, we get
\[
\mathbb{E}[H(w, Y_{n+1}) \mid Y_1] = \mathbb{E}[\delta_{n+1}(w)e_n \mid Y_1] = \mathbb{E}\Big[\delta_{n+1}(w)\sum_{k=0}^{n}(\gamma\lambda)^{n-k}x(S_k) \,\Big|\, S_0\Big].
\]
Now define a two-sided Markov chain $\{(\bar{S}_t, \bar{A}_t)\}_{t=\ldots,-2,-1,0,1,2,\ldots}$ such that $\Pr(\bar{S}_t = s) = d_\pi(s)$ and $\Pr(\bar{A}_t = a \mid \bar{S}_t = s) = \pi(a \mid s)$, i.e., the new chain always stays in the stationary distribution of the original chain. Similarly, define $\bar{\delta}_{n+1}(w) \doteq \delta((\bar{S}_n, \bar{A}_n, \bar{S}_{n+1}), w)$. We then have
\[
\mathbb{E}\Big[\delta_{n+1}(w)\sum_{k=0}^{n}(\gamma\lambda)^{n-k}x(S_k) \,\Big|\, S_0\Big] = f_0(n) + f_1(n) - f_2(n),
\]
where
\[
\begin{aligned}
f_0(n) &\doteq \mathbb{E}\Big[\bar{\delta}_{n+1}(w)\sum_{k=-\infty}^{n}(\gamma\lambda)^{n-k}x(\bar{S}_k)\Big], \\
f_1(n) &\doteq \mathbb{E}\Big[\delta_{n+1}(w)\sum_{k=0}^{n}(\gamma\lambda)^{n-k}x(S_k) \,\Big|\, S_0\Big] - \mathbb{E}\Big[\bar{\delta}_{n+1}(w)\sum_{k=0}^{n}(\gamma\lambda)^{n-k}x(\bar{S}_k)\Big], \\
f_2(n) &\doteq \mathbb{E}\Big[\bar{\delta}_{n+1}(w)\sum_{k=-\infty}^{-1}(\gamma\lambda)^{n-k}x(\bar{S}_k)\Big].
\end{aligned}
\]
In the proof of Lemma 6.7 of Bertsekas and Tsitsiklis [1996], it is proved that $f_0(n) = Aw + b$, which coincides with $h(w)$. Thus the rest of the proof is dedicated to proving that $f_1(n)$ and $f_2(n)$ decay geometrically. For $f_2(n)$, we have $\|\bar{\delta}_{n+1}(w)x(\bar{S}_k)\| \le C_{18,1}(\|Xw\| + 1)$ for some $C_{18,1}$ (cf. (16)). We then have
\[
\|f_2(n)\| \le C_{18,1}(\|Xw\| + 1)\sum_{k=-\infty}^{-1}(\gamma\lambda)^{n-k} = C_{18,1}(\|Xw\| + 1)(\gamma\lambda)^n\sum_{k=1}^{\infty}(\gamma\lambda)^k.
\]
For $f_1(n)$, since $\{S_t\}$ mixes geometrically, there exist some $\tau_1 \in [0, 1)$ and $C_{18,2} > 0$ such that
\[
\sum_s\big|\Pr(S_k = s \mid S_0) - \Pr(\bar{S}_k = s)\big| \le C_{18,2}\tau_1^k.
\]
Then we have
\[
\mathbb{E}[\delta_{n+1}(w)x(S_k) \mid S_0] - \mathbb{E}[\bar{\delta}_{n+1}(w)x(\bar{S}_k)] = \sum_s\Pr(S_k = s \mid S_0)\,x(s)\,\mathbb{E}[\delta_{n+1}(w) \mid S_k = s] - \sum_s d_\pi(s)\,x(s)\,\mathbb{E}[\bar{\delta}_{n+1}(w) \mid \bar{S}_k = s].
\]
Noticing that $\mathbb{E}[\delta_{n+1}(w) \mid S_k = s] = \mathbb{E}[\bar{\delta}_{n+1}(w) \mid \bar{S}_k = s]$ due to the Markov property, we obtain
\[
\big\|\mathbb{E}[\delta_{n+1}(w)x(S_k) \mid S_0] - \mathbb{E}[\bar{\delta}_{n+1}(w)x(\bar{S}_k)]\big\| \le C_{18,2}\tau_1^k\,C_{18,1}(\|Xw\| + 1).
\]
This means
\[
\|f_1(n)\| \le C_{18,2}C_{18,1}(\|Xw\| + 1)\sum_{k=0}^{n}(\gamma\lambda)^{n-k}\tau_1^k.
\]
Noticing that $\sum_{k=0}^{n}(\gamma\lambda)^{n-k}\tau_1^k \le n\max\{\gamma\lambda, \tau_1\}^n$ then completes the proof.

D.3 Proof of Lemma 7

Proof. We start with proving that for all $w \in \ker(A)^\perp$, $w^\top A w \le -C_7\|w\|^2$. This is apparently true if $w = 0$. Now fix any $w \in \ker(A)^\perp$ with $w \ne 0$, which implies that $Aw \ne 0$. We first prove by contradiction that $w^\top A w \ne 0$. Otherwise, if $w^\top A w = 0$, we have $w^\top X^\top D_\pi(\gamma P_\lambda - I)Xw = 0$. Since $D_\pi(\gamma P_\lambda - I)$ is n.d., we then
https://arxiv.org/abs/2505.21391v1
get $Xw = 0$, further implying $Aw = 0$, which is a contradiction. We have now proved that $w^\top A w \neq 0$. We next prove that $w^\top A w < 0$. This is from the fact that $A$ is n.d., i.e., $z^\top A z \le 0$ for all $z\in\mathbb{R}^d$. But $w^\top A w \neq 0$, so we must have $w^\top A w < 0$. Finally, we use an extreme value theorem argument to complete the proof. Define
$$Z \doteq \left\{w \,\middle|\, w\in\ker(A)^\perp,\ \|w\| = 1\right\}.$$
Because $z\in Z$ implies $z\in\ker(A)^\perp$ and $z\neq 0$, we have $z^\top A z < 0$ for all $z\in Z$. Since $Z$ is clearly compact, the extreme value theorem confirms that the function $z\mapsto z^\top A z$ attains its maximum value on $Z$, denoted as $-C_7<0$, i.e., we have
$$\forall z\in Z,\quad z^\top A z \le -C_7. \tag{15}$$
For any $w\in\ker(A)^\perp$ with $w\neq 0$, we have $\frac{w}{\|w\|}\in Z$, so $w^\top A w \le -C_7\|w\|^2$, which completes the proof of the first part.

We now prove that $w - \Gamma(w) \in \ker(A)^\perp$ for all $w\in\mathbb{R}^d$. We recall that $\Gamma$ is the orthogonal projection onto $W_* = \{w \mid Aw + b = 0\}$. Since $\Gamma$ is the orthogonal projection onto $W_*$, we know $w - \Gamma(w) \in W_*^\perp$. Fix any $w_*\in W_*$ and let $z\in\ker(A)$; we then have $A(w_*+z)+b = 0$, so $w_*+z\in W_*$. We then have
$$\langle w-\Gamma(w), z\rangle = \langle w-\Gamma(w), w_*+z\rangle - \langle w-\Gamma(w), w_*\rangle = 0 - 0 = 0,$$
confirming that $w-\Gamma(w)\in\ker(A)^\perp$, which completes the proof.

D.4 Proof of Lemma 8

Proof. Let $y=(s,a,s',e)\in\mathcal{Y}$. Since $\left|x(s)^\top w\right| \le \max_{s\in\mathcal{S}}\left|x(s)^\top w\right| \le \|Xw\|$, according to (10), we have
$$\|H(w,y)\| = \left\|e\left(r(s,a)+\gamma x(s')^\top w - x(s)^\top w\right)\right\| \tag{16}$$
$$\le C_e\left(|r(s,a)| + \gamma\left|x(s')^\top w\right| + \left|x(s)^\top w\right|\right) \le C_e\left(C_R + (\gamma+1)\|Xw\|\right) \le C_8(\|Xw\|+1),$$
where $C_8 \doteq C_e(C_R+\gamma+1)$. For $\|h(w)\|$, we have
$$\|h(w)\| = \left\|\mathbb{E}_{y\sim d_\mathcal{Y}}[H(w,y)]\right\| \le \mathbb{E}_{y\sim d_\mathcal{Y}}[\|H(w,y)\|] \le C_8(\|Xw\|+1),$$
which completes the proof.

E Proofs in Section 5.3

E.1 Proof of Lemma 9

Proof. The update to $\hat J_t$ in (Average Reward TD) is
$$\hat J_{t+1} = \hat J_t + \alpha_t\left(c_\beta R_{t+1} - c_\beta \hat J_t\right).$$
This matches the first row of
$$\tilde A(Y_t)\tilde w_t + \tilde b(Y_t) = \begin{bmatrix} -c_\beta & 0 \\ -\Pi e_t & \Pi e_t\left(x(S_{t+1})^\top - x(S_t)^\top\right)\end{bmatrix}\begin{bmatrix}\hat J_t \\ \Pi w_t\end{bmatrix} + \begin{bmatrix} c_\beta R_{t+1} \\ R_{t+1}\Pi e_t\end{bmatrix}.$$
Now consider the update for $w_t$:
$$w_{t+1} = w_t + \alpha_t\left(R_{t+1} - \hat J_t + x(S_{t+1})^\top w_t - x(S_t)^\top w_t\right)e_t.$$
Applying the projection matrix $\Pi$ on both sides yields
$$\Pi w_{t+1} - \Pi w_t = \alpha_t\,\Pi\left(R_{t+1} - \hat J_t + x(S_{t+1})^\top w_t - x(S_t)^\top w_t\right)e_t = \alpha_t\left(R_{t+1} - \hat J_t + x(S_{t+1})^\top w_t - x(S_t)^\top w_t\right)\Pi e_t = \alpha_t\left(R_{t+1} - \hat J_t + x(S_{t+1})^\top \Pi w_t - x(S_t)^\top \Pi w_t\right)\Pi e_t.$$
To see the last equality, we recall Lemma 1 and recall $\Pi = X_1^\dagger X_1$. We then have
$$X\Pi w = X_1\Pi w + \mathbf{1}\theta^\top \Pi w = X_1 w + \mathbf{1}\theta^\top \Pi w.$$
This means that $x(s')^\top\Pi w - x(s)^\top\Pi w = x_1(s')^\top w - x_1(s)^\top w$, where we use $x_1(s)$ to denote the $s$-th row of $X_1$. We also have
$$x(s')^\top w - x(s)^\top w = (x_1(s') + \theta)^\top w - (x_1(s) + \theta)^\top w = x_1(s')^\top w - x_1(s)^\top w,$$
which confirms the last equality and then completes the proof.

E.2 Proof of Lemma 19

Lemma 19. $\tilde A\,\Gamma(\tilde w) + \tilde b = 0$.

Proof. According to the definition of $\Gamma(\tilde w)$, $\Gamma(\tilde w)\in\widetilde W_* \doteq \left\{\begin{bmatrix}J_\pi \\ \Pi w\end{bmatrix} \,\middle|\, w\in W_*\right\}$. We have
$$\tilde A = \mathbb{E}_{y\sim d_\mathcal{Y}}\left[\tilde A(y)\right] = \mathbb{E}_{(s,a,s',e)\sim d_\mathcal{Y}}\begin{bmatrix} -c_\beta & 0 \\ -\Pi e & \Pi e\left(x(s')^\top - x(s)^\top\right)\end{bmatrix} = \begin{bmatrix} -c_\beta & 0 \\ -\Pi\,\mathbb{E}_{d_\mathcal{Y}}[e] & \Pi A\end{bmatrix},$$
$$\tilde b = \mathbb{E}_{y\sim d_\mathcal{Y}}\left[\tilde b(y)\right] = \mathbb{E}_{(s,a,s',e)\sim d_\mathcal{Y}}\begin{bmatrix} c_\beta\, r(s,a) \\ r(s,a)\,\Pi e\end{bmatrix} = \begin{bmatrix} c_\beta J_\pi \\ \Pi\,\mathbb{E}_{d_\mathcal{Y}}[e]\,J_\pi + \Pi b\end{bmatrix}. \tag{17}$$
Therefore, for the first row of $\tilde A\,\Gamma(\tilde w) + \tilde b$, we get $c_\beta(J_\pi - J_\pi) = 0$. For the second row, we get
$$-\Pi\,\mathbb{E}_{d_\mathcal{Y}}[e]\,J_\pi + \Pi A\Pi w + \Pi\,\mathbb{E}_{d_\mathcal{Y}}[e]\,J_\pi + \Pi b = \Pi(A\Pi w + b) = \Pi\left(X_1^\top D_\pi(P_\lambda - I)X_1\Pi w + b\right) = \Pi\left(X_1^\top D_\pi(P_\lambda - I)X_1 w + b\right) = \Pi(Aw+b) = 0,$$
where the second equality comes with the definition of $\Pi$. This completes the proof.

E.3 Proof of Lemma 10

Proof. If $z=0$, the lemma trivially holds. So now let $z = \begin{bmatrix}z_1 \\ z_2\end{bmatrix} \in \mathbb{R}\times\ker(X_1)^\perp$ with $z\neq 0$. With (17), we have
$$\tilde A = \begin{bmatrix} -c_\beta & 0 \\ -\Pi\,\mathbb{E}_{(s,a,s',e)\sim d_\mathcal{Y}}[e] & \Pi A\end{bmatrix} = \begin{bmatrix} -c_\beta & 0 \\ -\Pi\,\mathbb{E}_{d_\mathcal{Y}}[e] & \Pi X_1^\top D_\pi(P_\lambda - I)X_1\end{bmatrix} \quad \text{(Lemma 14)}.$$
For simplicity, define $q \doteq \mathbb{E}_{d_\mathcal{Y}}[e]$ and $B \doteq X_1^\top D_\pi(P_\lambda - I)X_1$. We then have
$$z^\top\tilde A z = \begin{bmatrix}z_1 & z_2^\top\end{bmatrix}\begin{bmatrix} -c_\beta z_1 \\ \Pi(-q z_1 + B z_2)\end{bmatrix} = -c_\beta z_1^2 + z_2^\top\Pi(-q z_1 + B z_2).$$
Recalling that $\Pi = X_1^\dagger X_1$ is symmetric, we can get
$$z_2^\top\Pi(-q z_1 + B z_2) = (\Pi z_2)^\top(-q z_1 + B z_2) = z_2^\top(-q z_1 + B z_2),$$
where the last
equality holds because $z_2\in\ker(X_1)^\perp$, so $\Pi z_2 = z_2$. Thus,
$$z^\top\tilde A z = -c_\beta z_1^2 - z_2^\top q\, z_1 + z_2^\top B z_2.$$
We now characterize $z_2^\top B z_2$. Apparently, $z_2^\top B z_2 \le 0$ always holds because $D_\pi(P_\lambda - I)$ is n.s.d. In view of (5), the equality holds only if $X_1 z_2 = c\mathbf{1}$. But $\mathbf{1}\notin\operatorname{col}(X_1)$ and $z_2\in\ker(X_1)^\perp$, so the equality holds only when $z_2 = 0$. We have now proved that for all $z_2\in\ker(X_1)^\perp$ with $z_2\neq 0$, it holds that $z_2^\top B z_2 < 0$. Using the normalization trick and the extreme value theorem again (cf. (15)), we confirm that there exists some constant $C_{10,1}>0$ such that
$$\forall z_2\in\ker(X_1)^\perp,\quad z_2^\top B z_2 \le -C_{10,1}\|z_2\|^2.$$
Since $z\neq 0$, we now discuss two cases.

Case 1: $z_1 = 0$, $z_2\neq 0$. In this case, we have $z^\top\tilde A z = z_2^\top B z_2 < 0$.

Case 2: $z_1\neq 0$. In this case, we have
$$z^\top\tilde A z = -c_\beta z_1^2 - z_1 z_2^\top q + z_2^\top B z_2 \le -c_\beta z_1^2 + |z_1|\,\|z_2\|\,\|q\| - C_{10,1}\|z_2\|^2.$$
By completing squares, it is easy to see that when $c_\beta$ is sufficiently large (depending on $\|q\|$ and $C_{10,1}$), it holds that $z^\top\tilde A z < 0$ because $z_1\neq 0$.

Combining both cases, we have proved that for all $z\in\mathbb{R}\times\ker(X_1)^\perp$ with $z\neq 0$, it holds that $z^\top\tilde A z < 0$. Using the normalization trick and the extreme value theorem again (cf. (15)) then completes the proof.

E.4 Proof of Lemma 11

Proof. By definition, $\widetilde W_* = \left\{\begin{bmatrix}J_\pi \\ \Pi w\end{bmatrix}\,\middle|\, w\in W_*\right\}$. In view of Lemma 16, let $w_*$ be any fixed vector in $W_*$. Then any $\tilde w_*\in\widetilde W_*$ can be written as
$$\tilde w_* = \begin{bmatrix}J_\pi \\ \Pi(w_*+w_0)\end{bmatrix}$$
with some $w_0\in\ker(X_1)$. We then have
$$\tilde X\tilde w_* = \begin{bmatrix}J_\pi \\ X\Pi(w_*+w_0)\end{bmatrix} = \begin{bmatrix}J_\pi \\ X\Pi w_*\end{bmatrix},$$
where the last equality holds because $\Pi$ is the orthogonal projection onto $\ker(X_1)^\perp$. This means that $\tilde X\tilde w_*$ is a constant regardless of $\tilde w_*$, which completes the proof.

E.5 Proof of Lemma 20

Lemma 20. $(\hat J_t - J_\pi)^2 + d(w_t, W_*)^2 = d(\tilde w_t, \widetilde W_*)^2$.

Proof. We recall that $\Pi$ is the orthogonal projection onto $\ker(X_1)^\perp$. Let $\Pi'$ be the orthogonal projection onto $\ker(X_1)$. We recall from Lemma 16 that $W_* = \{w_*\} + \ker(X_1)$ with $w_*$ being any fixed point in $W_*$. Thus any element of $W_*$ can be written as $w_* + w_0$ with some $w_0\in\ker(X_1)$.
Then for any $w\in\mathbb{R}^d$, we have
$$d(w, W_*)^2 = \inf_{w'\in W_*}\|w - w'\|^2 = \inf_{w_0\in\ker(X_1)}\|w - w_* - w_0\|^2 = \inf_{w_0\in\ker(X_1)}\|\Pi w + \Pi' w - \Pi w_* - \Pi' w_* - w_0\|^2 = \inf_{w_0\in\ker(X_1)}\left(\|\Pi w - \Pi w_*\|^2 + \|\Pi' w - \Pi' w_* - w_0\|^2\right) = \|\Pi w - \Pi w_*\|^2,$$
where the last equality holds because we can select $w_0 = \Pi' w - \Pi' w_*$. Define $\Pi W_* \doteq \{\Pi w \mid w\in W_*\}$. Then we have
$$d(\Pi w, \Pi W_*) = \inf_{w'\in W_*}\|\Pi w - \Pi w'\| = \inf_{w_0\in\ker(X_1)}\|\Pi w - \Pi(w_* + w_0)\| = \|\Pi w - \Pi w_*\|,$$
where the last equality holds because $w_0\in\ker(X_1)$ and $\Pi$ is the projection onto $\ker(X_1)^\perp$, so $\Pi w_0 = 0$. We now have $d(w, W_*) = d(\Pi w, \Pi W_*)$ for all $w$. Then we have
$$d(\tilde w_t, \widetilde W_*)^2 = (\hat J_t - J_\pi)^2 + d(\Pi w_t, \Pi W_*)^2 = (\hat J_t - J_\pi)^2 + d(w_t, W_*)^2,$$
which completes the proof.

F Details of Experiments

We use a variant of Boyan's chain [Boyan, 1999] with 15 states ($s_0, s_1, \dots, s_{14}$) and 5 actions ($a_0, \dots, a_4$). The chain has deterministic transitions. For $s_2, \dots, s_{14}$, the action $a_0$ goes to $s_{i-1}$ and the actions $a_1$ to $a_4$ go to $s_{i-2}$; $s_1$ always transitions to $s_0$; $s_0$ transitions uniformly randomly to any state. The reward function is
$$r(s,a) = \begin{cases}1 & \text{if } s = s_0, \\ 0 & \text{otherwise.}\end{cases}$$
We use a uniform random policy $\pi(a\,|\,s) = 0.5$. The feature matrix $X\in\mathbb{R}^{15\times 5}$ is designed to be of rank 3:
$$X = \begin{bmatrix}
0.07 & 0.11 & 0.18 & 0.14 & 0.61 \\
0.13 & 0.19 & 0.32 & 0.26 & 0.45 \\
0.11 & 0.17 & 0.28 & 0.22 & 0.39 \\
0.24 & 0.36 & 0.60 & 0.48 & 0.84 \\
0.18 & 0.28 & 0.46 & 0.36 & 1.00 \\
0.20 & 0.30 & 0.50 & 0.40 & 1.06 \\
0.31 & 0.47 & 0.78 & 0.62 & 1.45 \\
0.29 & 0.45 & 0.74 & 0.58 & 1.39 \\
0.42 & 0.64 & 1.06 & 0.84 & 1.84 \\
0.40 & 0.62 & 1.02 & 0.80 & 1.78 \\
0.47 & 0.73 & 1.20 & 0.94 & 2.39 \\
0.53 & 0.81 & 1.34 & 1.06 & 2.23 \\
0.58 & 0.90 & 1.48 & 1.16 & 2.78 \\
0.60 & 0.92 & 1.52 & 1.20 & 2.84 \\
0.67 & 1.03 & 1.70 & 1.34 & 3.45
\end{bmatrix}.$$
Each experiment runs for $1.5\times 10^6$ steps, averaged over 10 runs. These experiments were conducted on a server equipped with an AMD EPYC 9534 64-Core Processor, with each run taking approximately 1 minute to complete. Memory requirements are negligible.
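For concreteness, the transition and reward dynamics described above can be sketched in a few lines of Python. The function name and interface below are our own illustration, not the authors' released code.

```python
import numpy as np

def step(state, action, rng):
    """One transition of the 15-state Boyan-chain variant.

    States are 0..14 and actions 0..4; dynamics are deterministic except
    for state 0, which jumps uniformly at random to any state.
    """
    if state == 0:
        next_state = int(rng.integers(0, 15))  # s0: uniform over all states
    elif state == 1:
        next_state = 0                         # s1 always moves to s0
    else:
        # a0 moves one step down the chain; a1..a4 move two steps down
        next_state = state - 1 if action == 0 else state - 2
    reward = 1.0 if state == 0 else 0.0        # r(s, a) = 1 iff s = s0
    return next_state, reward
```

Running this loop for the stated $1.5\times 10^6$ steps under the uniform random policy reproduces the data-generating process of the experiments.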
arXiv:2505.21393v1 [cs.LG] 27 May 2025

Leveraging the Power of Conversations: Optimal Key Term Selection in Conversational Contextual Bandits

Maoli Liu (The Chinese University of Hong Kong, Hong Kong, China; mlliu@cse.cuhk.edu.hk), Zhuohua Li*† (Guangzhou Institute of Technology, Xidian University, Guangzhou, Guangdong, China; zhli@cse.cuhk.edu.hk), Xiangxiang Dai (The Chinese University of Hong Kong, Hong Kong, China; xxdai23@cse.cuhk.edu.hk), John C.S. Lui (The Chinese University of Hong Kong, Hong Kong, China; cslui@cse.cuhk.edu.hk)

Abstract

Conversational recommender systems proactively query users with relevant "key terms" and leverage the feedback to elicit users' preferences for personalized recommendations. Conversational contextual bandits, a prevalent approach in this domain, aim to optimize preference learning by balancing exploitation and exploration. However, several limitations hinder their effectiveness in real-world scenarios. First, existing algorithms employ key term selection strategies with insufficient exploration, often failing to thoroughly probe users' preferences and resulting in suboptimal preference estimation. Second, current algorithms typically rely on deterministic rules to initiate conversations, causing unnecessary interactions when preferences are well-understood and missed opportunities when preferences are uncertain. To address these limitations, we propose three novel algorithms: CLiSK, CLiME, and CLiSK-ME. CLiSK introduces smoothed key term contexts to enhance exploration in preference learning, CLiME adaptively initiates conversations based on preference uncertainty, and CLiSK-ME integrates both techniques. We theoretically prove that all three algorithms achieve a tighter regret upper bound of $O(\sqrt{dT\log T})$ with respect to the time horizon $T$, improving upon existing methods. Additionally, we provide a matching lower bound $\Omega(\sqrt{dT})$ for conversational bandits, demonstrating that our algorithms are nearly minimax optimal.
Extensive evaluations on both synthetic and real-world datasets show that our approaches achieve at least a 14.6% improvement in cumulative regret.

CCS Concepts: • Information systems → Recommender systems; • Theory of computation → Online learning algorithms; Online learning theory.

* Zhuohua Li is the corresponding author. † Also with The Chinese University of Hong Kong.

This work is licensed under a Creative Commons Attribution 4.0 International License. KDD '25, Toronto, ON, Canada. © 2025 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-1454-2/2025/08.

Keywords: Conversational Recommendation, Preference Learning, Contextual Bandits, Online Learning

ACM Reference Format: Maoli Liu, Zhuohua Li, Xiangxiang Dai, and John C.S. Lui. 2025. Leveraging the Power of Conversations: Optimal Key Term Selection in Conversational Contextual Bandits. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2 (KDD '25), August 3–7, 2025, Toronto, ON, Canada. ACM, New York, NY, USA, 17 pages. https://doi.org/10.1145/3711896.3737025

KDD Availability Link: The source code of this paper has been made publicly available at https://doi.org/10.5281/zenodo.15490021.

1 Introduction

Recommender systems play a crucial role in applications like movie recommendations, online advertising, and personalized news feeds, where providing relevant and engaging content is essential for user satisfaction. To cater to diverse user interests, recommender systems are designed to interact with users and continuously learn from their feedback. For instance, in product and news recommendations, the system can monitor users' real-time click rates and accordingly refine its recommendations. Modern recommender systems incorporate advanced online learning techniques to adapt in real time and uncover previously unknown user preferences. A fundamental
https://arxiv.org/abs/2505.21393v1
challenge in recommender systems is the trade-off between exploration (i.e., recommending new items to uncover users' unknown preferences) and exploitation (i.e., recommending items that align with users' historical preferences). Contextual bandits [15] address this trade-off by enabling the system to learn from user interactions continuously while optimizing recommendations without compromising the user experience. In this framework, each item to be recommended is treated as an "arm", represented by a feature vector. At each round, the agent (i.e., the recommender system) recommends an arm to the user based on historical interactions and the context of each arm, and then receives feedback/rewards (e.g., clicks). The objective of the algorithm executed by the agent is to design an arm recommendation strategy that maximizes cumulative reward (or equivalently, minimizes cumulative regret) over time. Another major challenge in recommender systems is the "cold start" problem, where the system initially lacks sufficient data about

[Figure 1 (screenshot): a ChatGPT-style prompt asking "Which response do you prefer? Your choice will help make ChatGPT better.", showing two Python implementations of a phase-elimination bandit algorithm — Response 1 as a function `phase_elimination(num_arms, num_rounds, arms_means)` and Response 2 as a class `PhaseElimination` — with the code bodies elided in the original.]
Figure 1: Illustration of conversational recommendation by ChatGPT, where users select their preferred response from presented options.

new users' preferences, making accurate recommendations difficult. Conversational recommender systems (CRSs) [5, 11, 23, 30] have emerged as a promising solution. Unlike traditional systems that rely solely on feedback from recommended items, CRSs can actively initiate queries with users to collect richer feedback and quickly infer their preferences. For example, as shown in Figure 1, platforms like ChatGPT occasionally present users with multiple response options and allow them to select their preferred one. Through these interactions, ChatGPT can refine its understanding and improve future responses to better align with user preferences. To model these interactions, conversational contextual bandits [29] are proposed as a natural extension of contextual bandits. In this framework, besides recommending items (arms) and observing arm-level feedback, the agent can proactively prompt users with questions about key terms and receive key term-level feedback. The key terms are related to a subset of arms, providing valuable insights into users' preferences and improving recommendation quality. Despite recent advances in conversational contextual bandits [25, 27, 28], existing approaches still face the following limitations:

• Insufficient Exploration in Key Term Selection: Existing studies about conversational bandits fail to sufficiently explore key terms, limiting their effectiveness in preference learning. Zhang et al. [29] introduce the ConUCB algorithm with a regret upper bound of $O(d\sqrt{T}\log T)$, where $d$ is the dimension and $T$ is the number
of rounds. However, despite incorporating additional queries about key terms, the method does not yield substantial improvement over non-conversational approaches. Since then, improving regret through conversational interactions has remained an open problem in the field. Wang et al. [25] and Yang et al. [28] introduce an additional assumption that the key term set spans $\mathbb{R}^d$ and propose the ConLinUCB-BS and ConDuel algorithms, respectively. The two algorithms reduce a $\sqrt{\log T}$ term in the regret, but worsen the dependence on $d$ (as discussed in Section 4.4), resulting in a suboptimal regret bound. To achieve optimal regret, more explorative key term selection strategies are needed to efficiently gather informative user feedback and improve learning efficiency.

• Inflexible Conversation Mechanism: Existing conversational bandit algorithms [27, 29] often use a deterministic function to control the frequency of conversations. Specifically, the agent can only initiate $Q$ conversations at once per $P$ rounds, where $P$ and $Q$ are fixed integers. However, this rigid approach is impractical and insufficient in real-world scenarios. For example, in a music streaming service, a fixed-frequency approach may cause unnecessary interactions when users' preferences are already well-understood, disrupting the listening experience. Conversely, it may fail to collect feedback when the uncertainty is high, leading to suboptimal recommendations. To address these limitations, a more adaptive conversation mechanism is needed to adjust the interaction frequency based on the preference uncertainty.

Motivated by these observations, we develop three algorithms aimed at improving conversational contextual bandits. To start, we introduce the concept of "smoothed key term contexts", inspired by the smoothed analysis for contextual bandits [13], and propose the Conversational LinUCB with Smoothed Key terms (CLiSK) algorithm.
Specifically, CLiSK launches conversations at a fixed frequency, similar to Zhang et al. [29], but greedily selects key terms that are slightly perturbed by Gaussian noise. For example, in movie recommendations, instead of asking directly about a genre like "comedy" or "drama", CLiSK blends elements of related genres, such as "comedy-drama" or "dark comedy". This approach helps the system explore users' preferences in a more nuanced manner. We will show that these small perturbations have strong theoretical implications, allowing the agent to explore the feature space more effectively and speed up the learning process.

We next develop the Conversational LinUCB with Minimum Eigenvalues (CLiME) algorithm, which introduces an adaptive conversational mechanism driven by preference uncertainty. Unlike the fixed-frequency approach of Wang et al. [25] and Zhang et al. [29], CLiME assesses preference uncertainty and initiates conversations only when the uncertainty is high, thereby maximizing information gain while avoiding unnecessary interactions. When a conversation is triggered, CLiME selects key terms that target the areas of highest uncertainty within the feature space, rapidly refining user preferences. This adaptive approach not only ensures that conversations are timely and relevant, but also improves the user experience. Additionally, we design a family of uncertainty checking functions to determine when to assess the uncertainty, offering greater flexibility and better alignment with diverse applications.

The smoothed key term contexts approach in CLiSK and the adaptive conversation technique in CLiME are orthogonal,
allowing them to be applied independently or in combination. Therefore, we further propose the CLiSK-ME algorithm, which integrates both techniques to maximize exploration efficiency and adaptively adjust user interactions. By leveraging the strengths of both methods, CLiSK-ME enhances exploration efficiency and optimizes user interactions for improved preference learning.

Our algorithms introduce advanced key term selection strategies, significantly enhancing the efficiency of conversational contextual bandits. Theoretically, we prove that CLiSK achieves a regret upper bound of $O(\sqrt{dT\log T}+d)$, while CLiME and CLiSK-ME achieve a regret upper bound of $O(\sqrt{dT\log T})$. Notably, all three algorithms reduce the dependence on $T$ by a factor of $\sqrt{d}$ compared to prior studies. To the best of our knowledge, our work is the first to achieve the $\widetilde O(\sqrt{dT})$ regret in the conversational bandit literature. In addition, we establish a matching lower bound of $\Omega(\sqrt{dT})$, showing that our algorithms are minimax optimal up to logarithmic factors.

In summary, our contributions are listed as follows.
• We propose three novel conversational bandit algorithms: CLiSK with smoothed key term contexts, CLiME with an adaptive conversation mechanism, and CLiSK-ME, which integrates both for improved preference learning.
• We establish the minimax optimality of our algorithms by proving regret upper bounds of $O(\sqrt{dT\log T}+d)$ for CLiSK and $O(\sqrt{dT\log T})$ for CLiME and CLiSK-ME, along with a matching lower bound of $\Omega(\sqrt{dT})$. These results underscore the theoretical advancements achieved by our methods.
• We conduct extensive evaluations on both synthetic and real-world datasets, showing that our algorithms reduce regret by over 14.6% compared to baselines.

2 Problem Formulation

In conversational contextual bandits, an agent interacts with a user over $T\in\mathbb{N}^+$ rounds.
The user's preferences are represented by a fixed but unknown vector $\boldsymbol{\theta}_*\in\mathbb{R}^d$, where $d$ is the dimension. The agent's goal is to learn $\boldsymbol{\theta}_*$ to recommend items that align with the user's preferences. There exists a finite arm set denoted by $\mathcal{A}$, where each arm $a\in\mathcal{A}$ represents an item and is associated with a feature vector $\boldsymbol{x}_a\in\mathbb{R}^d$. We denote $[T]=\{1,2,\dots,T\}$. At each round $t\in[T]$, the agent is given a subset of arms $\mathcal{A}_t\subseteq\mathcal{A}$. The agent then selects an arm $a_t\in\mathcal{A}_t$ and receives a reward $r_{a_t,t}$. The reward is assumed to be linearly related to the preference vector and the feature vector of the arm, i.e., $r_{a_t,t}=\boldsymbol{x}_{a_t}^\top\boldsymbol{\theta}_*+\eta_t$, where $\eta_t$ is a random noise term. Let $a^*_t$ be the optimal arm at round $t$, i.e., $a^*_t=\arg\max_{a\in\mathcal{A}_t}\boldsymbol{x}_a^\top\boldsymbol{\theta}_*$. The agent's objective is to minimize the cumulative regret, which is defined as the total difference between the rewards of the optimal arms and the rewards obtained by the agent, i.e.,
$$R(T)=\sum_{t=1}^{T}\left(\boldsymbol{x}_{a^*_t}^\top\boldsymbol{\theta}_* - \boldsymbol{x}_{a_t}^\top\boldsymbol{\theta}_*\right).$$
Beyond observing the user's preference information through arm recommendations, the agent can gather additional feedback by launching conversations involving key terms. Specifically, a "key term" represents a category or keyword associated with a subset of arms. For example, in movie recommendations, key terms might include genres like "comedy" or "thriller", and themes such as "romance" or "sci-fi". Let $\mathcal{K}$ denote the finite set of key terms, where each key
term $k\in\mathcal{K}$ corresponds to a context vector $\tilde{\boldsymbol{x}}_k\in\mathbb{R}^d$. At round $t$, if a conversation is initiated, the agent selects a key term $k\in\mathcal{K}$, queries the user, and receives key term-level feedback $\tilde r_{k,t}$. We follow the formulation of Wang et al. [25] that the user's preference vector $\boldsymbol{\theta}_*$ remains consistent across both arms and key terms. The relationship between key terms and the user's preference is also linear, i.e., $\tilde r_{k,t}=\tilde{\boldsymbol{x}}_k^\top\boldsymbol{\theta}_*+\tilde\eta_t$, where $\tilde\eta_t$ is a random noise term.

We list and explain our assumptions as follows. Both Assumptions 1 and 2 are consistent with previous works on conversational contextual bandits [25, 29] and linear contextual bandits [1, 15].

Assumption 1. We assume that the feature vectors for both arms and key terms are normalized, i.e., $\|\boldsymbol{x}_a\|_2=1$ and $\|\tilde{\boldsymbol{x}}_k\|_2=1$ for all $a\in\mathcal{A}$ and $k\in\mathcal{K}$. We also assume the unknown preference vector $\boldsymbol{\theta}_*$ is bounded, i.e., $\|\boldsymbol{\theta}_*\|_2\le 1$.

Assumption 2. We assume the noise terms $\eta_t,\tilde\eta_t$ are conditionally independent and 1-sub-Gaussian across $T$ rounds.

3 Algorithm Design

In this section, we introduce our proposed algorithms, outlining their key components and implementation details.

3.1 CLiSK Algorithm

To enhance the exploration of users' preferences, we introduce the smoothed key term contexts and propose the CLiSK algorithm, detailed in Algorithm 1. The algorithm consists of two main modules: key term selection (Lines 4 to 10) and arm selection (Lines 11 to 16). Specifically, in each round $t$, the agent first determines whether to initiate a conversation based on a predefined query budget (Lines 2 and 3). If a conversation is initiated, the agent selects a key term $k$ (Line 5) and queries the user about it. Subsequently, the agent updates its estimate of the preference vector $\boldsymbol{\theta}_t$ (Line 11) and selects an arm $a_t$ for recommendation (Line 12). The strategies for key term selection and arm selection are elaborated as follows.
Algorithm 1: CLiSK
Input: $\mathcal{A}$, $\mathcal{K}$, $b(t)$, $\lambda$, $\{\alpha_t\}_{t>0}$
Initialization: $\boldsymbol{M}_1=\lambda\boldsymbol{I}_d$, $\boldsymbol{b}_1=\boldsymbol{0}_d$
1: for $t=1,\dots,T$ do
2:   $q_t=\lfloor b(t)\rfloor-\lfloor b(t-1)\rfloor$
3:   while $q_t>0$ do
4:     Smooth the key term contexts to get $\{\tilde{\tilde{\boldsymbol{x}}}_k\}_{k\in\mathcal{K}}$
5:     Select a key term $k=\arg\max_{k\in\mathcal{K}}\tilde{\tilde{\boldsymbol{x}}}_k^\top\boldsymbol{\theta}_t$
6:     Query the user's feedback for $k$
7:     Receive the key term-level feedback $\tilde r_{k,t}$
8:     $\boldsymbol{M}_t=\boldsymbol{M}_t+\tilde{\tilde{\boldsymbol{x}}}_{k,t}\tilde{\tilde{\boldsymbol{x}}}_{k,t}^\top$
9:     $\boldsymbol{b}_t=\boldsymbol{b}_t+\tilde r_{k,t}\tilde{\tilde{\boldsymbol{x}}}_{k,t}$
10:    $q_t=q_t-1$
11:  $\boldsymbol{\theta}_t=\boldsymbol{M}_t^{-1}\boldsymbol{b}_t$
12:  Select $a_t=\arg\max_{a\in\mathcal{A}_t}\boldsymbol{x}_a^\top\boldsymbol{\theta}_t+\alpha_t\|\boldsymbol{x}_a\|_{\boldsymbol{M}_t^{-1}}$
13:  Ask the user's preference for arm $a_t$
14:  Observe the reward $r_{a_t,t}$
15:  $\boldsymbol{M}_{t+1}=\boldsymbol{M}_t+\boldsymbol{x}_{a_t}\boldsymbol{x}_{a_t}^\top$
16:  $\boldsymbol{b}_{t+1}=\boldsymbol{b}_t+r_{a_t,t}\boldsymbol{x}_{a_t}$

3.1.1 Intuition Overview. Building on insights from Kannan et al. [13] and Raghavan et al. [20], we add small perturbations to the key term contexts to deepen the exploration of users' preferences. These perturbations increase data diversity and help uncover preferences that might be overlooked when selecting key terms directly. For instance, instead of using "comedy" alone, variations like "romantic comedy" or "dark comedy" can reveal more specific preferences. Below is the formal definition of smoothed key term contexts, where the perturbations are modeled as Gaussian noise.

Definition 1 (Smoothed Key Term Contexts). Given a key term set $\mathcal{K}$, the smoothed key term contexts are defined as $\{\tilde{\tilde{\boldsymbol{x}}}_k\}_{k\in\mathcal{K}}$, where $\tilde{\tilde{\boldsymbol{x}}}_k=\tilde{\boldsymbol{x}}_k+\boldsymbol{\varepsilon}_k$ for each $k\in\mathcal{K}$. The noise vector $\boldsymbol{\varepsilon}_k$ is independently drawn from a truncated multivariate Gaussian distribution $\mathcal{N}(\boldsymbol{0},\rho^2\boldsymbol{I}_d)$, where $\boldsymbol{I}_d$ is the $d$-dimensional identity matrix
and $\rho^2$ controls the level of perturbations. Each dimension of $\boldsymbol{\varepsilon}_k$ is truncated within $[-R,R]$ for some $R>0$, i.e., $|(\boldsymbol{\varepsilon}_k)_j|\le R$ for all $j\in[d]$.

3.1.2 Key Term Selection. When initiating conversations, the agent no longer selects key terms directly based on their original contexts. Instead, the agent applies a small random perturbation to each key term's context, as defined in Definition 1 (Line 4). It then greedily selects the key term with the highest value under the perturbed contexts, i.e., $k=\arg\max_{k\in\mathcal{K}}\tilde{\tilde{\boldsymbol{x}}}_k^\top\boldsymbol{\theta}_t$ (Line 5).

Remark 1. Note that the smoothed key term contexts are re-generated for each conversation. For notational consistency, we use the same notation $\{\tilde{\tilde{\boldsymbol{x}}}_k\}_{k\in\mathcal{K}}$ to represent the smoothed key term contexts across different conversations.

3.1.3 Conversation Frequency. Following Zhang et al. [29], CLiSK uses a deterministic function $b(t)$ to regulate the frequency of conversation initiation. The function $b(t)$ is monotonically increasing in $t$ and satisfies $b(0)=0$. At round $t$, the agent initiates $q(t)=\lfloor b(t)\rfloor-\lfloor b(t-1)\rfloor$ conversations if $q(t)>0$; otherwise, no conversation is conducted.

3.1.4 Arm Selection. CLiSK uses the Upper Confidence Bound (UCB) strategy for arm selection, a prevalent method in linear bandits. At round $t$, the agent updates its estimated preference vector $\boldsymbol{\theta}_t$ based on both arm-level and key term-level feedback. This estimation follows a ridge regression framework with regularization parameter $\lambda$, i.e., $\boldsymbol{\theta}_t=\boldsymbol{M}_t^{-1}\boldsymbol{b}_t$, with $\boldsymbol{M}_t$ and $\boldsymbol{b}_t$ defined as
$$\boldsymbol{M}_t=\sum_{s=1}^{t-1}\boldsymbol{x}_{a_s}\boldsymbol{x}_{a_s}^\top+\sum_{s=1}^{t}\sum_{k\in\mathcal{K}_s}\tilde{\tilde{\boldsymbol{x}}}_k\tilde{\tilde{\boldsymbol{x}}}_k^\top+\lambda\boldsymbol{I}_d, \qquad \boldsymbol{b}_t=\sum_{s=1}^{t-1}r_{a_s,s}\boldsymbol{x}_{a_s}+\sum_{s=1}^{t}\sum_{k\in\mathcal{K}_s}\tilde r_{k,s}\tilde{\tilde{\boldsymbol{x}}}_k,$$
where $\mathcal{K}_s$ is the set of key terms selected at round $s$. $\boldsymbol{M}_t$ is commonly referred to as the covariance matrix. After the update, the agent selects the arm with the highest UCB value, i.e.,
$$a_t=\arg\max_{a\in\mathcal{A}_t}\boldsymbol{x}_a^\top\boldsymbol{\theta}_t+\alpha_t\|\boldsymbol{x}_a\|_{\boldsymbol{M}_t^{-1}},$$
where $\|\boldsymbol{x}\|_{\boldsymbol{M}}$ denotes the Mahalanobis norm $\sqrt{\boldsymbol{x}^\top\boldsymbol{M}\boldsymbol{x}}$ and $\{\alpha_t\}_{t>0}$ are parameters designed to balance the exploration-exploitation trade-off.
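To make the two update rules concrete, here is a minimal Python sketch of one CLiSK step (smoothing, greedy key term choice, then the ridge/UCB arm choice). The feedback values $\tilde r_{k,t}$ and $r_{a_t,t}$ come from the user, so the $\boldsymbol{b}_t$ updates are left to the caller; all names are our own illustration, not the authors' released code.

```python
import numpy as np

def clisk_step(M, b, key_contexts, arm_contexts, alpha_t, rho=0.1, R=0.5, rng=None):
    """One CLiSK round: smoothed greedy key term choice, then a UCB arm choice.

    `M` and `b` are the ridge-regression statistics; updates of `b` with the
    observed feedback are omitted because rewards come from the user.
    """
    rng = rng or np.random.default_rng()
    theta = np.linalg.solve(M, b)                       # theta_t = M_t^{-1} b_t
    # Definition-1-style smoothing: Gaussian perturbation truncated to [-R, R].
    eps = np.clip(rng.normal(0.0, rho, key_contexts.shape), -R, R)
    smoothed = key_contexts + eps
    k = int(np.argmax(smoothed @ theta))                # greedy key term (Line 5)
    M = M + np.outer(smoothed[k], smoothed[k])          # rank-one update (Line 8)
    # UCB arm selection with a Mahalanobis-norm exploration bonus (Line 12).
    M_inv = np.linalg.inv(M)
    bonus = alpha_t * np.sqrt(np.einsum('ij,jk,ik->i', arm_contexts, M_inv, arm_contexts))
    a = int(np.argmax(arm_contexts @ theta + bonus))
    M = M + np.outer(arm_contexts[a], arm_contexts[a])  # rank-one update (Line 15)
    return k, a, M
```

The rank-one updates keep $\boldsymbol{M}_t$ positive definite (it starts at $\lambda\boldsymbol{I}_d$), so the solve and the bonus term are always well defined.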
3.2 CLiME Algorithm

To enable more adaptive and flexible conversation initiation, we introduce the CLiME algorithm, detailed in Algorithm 2. CLiME adopts the same arm selection strategy as CLiSK, but it introduces key innovations in determining when to initiate conversations and which key terms to select. Unlike CLiSK, which follows a deterministic function $b(t)$ for scheduling conversations, CLiME adaptively determines when to conduct a conversation based on the uncertainty in the preference estimation.

Algorithm 2: CLiME
Input: $\mathcal{A}$, $\mathcal{K}$, $\lambda$, $\alpha$, $\{\alpha_t\}_{t>0}$
Initialization: $\boldsymbol{M}_1=\lambda\boldsymbol{I}_d$, $\boldsymbol{b}_1=\boldsymbol{0}_d$
1: for $t=1,\dots,T$ do
2:   if UncertaintyChecking($t$) then
3:     Diagonalize $\boldsymbol{M}_t=\sum_{i=1}^{d}\lambda_{\boldsymbol{v}_i}\boldsymbol{v}_i\boldsymbol{v}_i^\top$
4:     for each $\lambda_{\boldsymbol{v}_i}<\alpha t$ do
5:       $k=\arg\max_{k\in\mathcal{K}}|\tilde{\boldsymbol{x}}_k^\top\boldsymbol{v}_i|$
6:       $n_k=\lceil(\alpha t-\lambda_{\boldsymbol{v}_i})/c_0^2\rceil$
7:       Schedule $n_k$ conversations about the key term $k$ before the next uncertainty check
8:       Update $\boldsymbol{M}_t$ and $\boldsymbol{b}_t$ accordingly
9:   $\boldsymbol{\theta}_t=\boldsymbol{M}_t^{-1}\boldsymbol{b}_t$
10:  Select $a_t=\arg\max_{a\in\mathcal{A}_t}\boldsymbol{x}_a^\top\boldsymbol{\theta}_t+\alpha_t\|\boldsymbol{x}_a\|_{\boldsymbol{M}_t^{-1}}$
11:  Ask the user's preference for arm $a_t$
12:  Observe the reward $r_{a_t,t}$
13:  $\boldsymbol{M}_{t+1}=\boldsymbol{M}_t+\boldsymbol{x}_{a_t}\boldsymbol{x}_{a_t}^\top$
14:  $\boldsymbol{b}_{t+1}=\boldsymbol{b}_t+r_{a_t,t}\boldsymbol{x}_{a_t}$

3.2.1 Intuition Overview. The main idea behind CLiME is to adaptively initiate conversations based on the current level of uncertainty in the estimated preference and use key terms to explore the uncertain directions effectively. Specifically, the covariance matrix $\boldsymbol{M}_t$ encodes information about the feature space, where its eigenvectors represent the principal directions within the space, and the corresponding eigenvalues indicate the level of uncertainty along these directions. A smaller eigenvalue indicates a higher uncertainty in the associated direction. Therefore, by guiding the agent to explore such high-uncertainty directions, the agent can reduce uncertainty and improve learning efficiency. If the minimum eigenvalue of $\boldsymbol{M}_t$ remains above a certain value, the agent ensures sufficient exploration
of the feature space. To facilitate exploration, we introduce the following assumption.

Assumption 3. We assume that the elements in the key term set $\mathcal{K}$ are sufficiently rich and diverse, such that for any $\boldsymbol{x}\in\mathbb{R}^d$ satisfying $\|\boldsymbol{x}\|_2=1$, there exists a key term $k\in\mathcal{K}$ such that $|\tilde{\boldsymbol{x}}_k^\top\boldsymbol{x}|\ge c_0$, where $c_0$ is some constant close to 1.

This mild assumption ensures that the key term set $\mathcal{K}$ is comprehensive enough to cover all relevant directions in the feature space. In other words, for any direction $\boldsymbol{x}$ that the agent might need to explore, there exists a key term $k\in\mathcal{K}$ whose context $\tilde{\boldsymbol{x}}_k$ aligns sufficiently well with $\boldsymbol{x}$. This diversity allows the agent to effectively reduce uncertainty by exploring underrepresented directions, thereby improving preference learning.

3.2.2 Conversation Initiation and Key Term Selection. In CLiME, conversation initiation and key term selection are designed to maximize the information gained from user interactions. As shown in Algorithm 2, the agent first evaluates the eigenvalues of the covariance matrix $\boldsymbol{M}_t$ (Line 3). If any eigenvalue $\lambda_{\boldsymbol{v}_i}$ falls below a certain threshold (derived from Section 4.2), i.e., $\lambda_{\boldsymbol{v}_i}<\alpha t$ (Line 4), the agent prompts $n_k=\lceil(\alpha t-\lambda_{\boldsymbol{v}_i})/c_0^2\rceil$ conversations by selecting key terms that most closely align with the corresponding eigenvector $\boldsymbol{v}_i$ (Lines 5 to 7). Here, $\alpha\in(0,c_0^2)$ is an exploration control parameter that regulates the exploration level. Note that the agent can distribute these $n_k$ conversations across multiple rounds before re-evaluating the eigenvalues of the covariance matrix.

To further enhance flexibility and accommodate diverse real-world applications, we design an uncertainty checking function UncertaintyChecking($t$) (Line 2). This function determines when to assess uncertainty and potentially trigger conversations. Examples of such checking functions are given as follows.
• Continuous Checking: The agent assesses uncertainty at every round and initiates conversations as needed.
• Fixed Interval Checking: The agent assesses uncertainty every $P$ rounds, where $P$ is a fixed integer.
• Exponential Phase Checking: The agent evaluates uncertainty at exponentially increasing intervals of $2^i$, where $i=1,2,\dots$.

Remark 2. The uncertainty checking functions in CLiME differ fundamentally from the frequency function $b(t)$ in ConUCB [29]. Specifically, these checking functions regulate how often uncertainty is assessed but do not directly dictate conversation initiation. In contrast, $b(t)$ deterministically controls both the timing and number of conversations. CLiME and ConUCB also differ in how they select key terms, further distinguishing the two approaches.

Remark 3. It is worth noting that the smoothed key term contexts approach in CLiSK and the adaptive conversation technique in CLiME are orthogonal. The two strategies can operate independently or be integrated to further enhance learning efficiency. To this end, we introduce the CLiSK-ME algorithm, detailed in Appendix A.1, which integrates both approaches to leverage their complementary strengths.

4 Theoretical Analysis

This section presents the theoretical results of our algorithms, which employ analytical techniques that differ from standard linear bandit methods. Detailed proofs of all lemmas and theorems are provided in the Appendices.

4.1 Regret Analysis of CLiSK Algorithm

Following Zhang et al. [29] and Wang et al. [25], we assume
https://arxiv.org/abs/2505.21393v1
$b(t) = bt$ for some $b \in (0, 1)$. We start with Lemma 1, which bounds the difference between the estimated and true rewards for each arm.

Lemma 1. Under Assumptions 1 and 2, for CLiSK, for any round $t \in [T]$ and any arm $a \in \mathcal{A}$, with probability at least $1 - \delta$ for some $\delta \in (0, 1)$, we have
$$\left| \boldsymbol{x}_a^\top \boldsymbol{\theta}_t - \boldsymbol{x}_a^\top \boldsymbol{\theta}^* \right| \le \alpha_t \|\boldsymbol{x}_a\|_{\boldsymbol{M}_t^{-1}}, \quad \text{where } \alpha_t = \sqrt{2 \log\left(\tfrac{1}{\delta}\right) + d \log\left(1 + \frac{t + (1 + \sqrt{d} R) b t}{\lambda d}\right)} + \sqrt{\lambda}.$$

Next, we examine the smoothed key term contexts and their impact on exploring the feature space.

Lemma 2. For any round $t \in [T]$, with the smoothed key term contexts in Definition 1, CLiSK has the following lower bound on the minimum eigenvalue of the matrix $\mathbb{E}[\tilde{\tilde{\boldsymbol{x}}}_k \tilde{\tilde{\boldsymbol{x}}}_k^\top]$ for any $k \in \mathcal{K}_t$, i.e.,
$$\lambda_{\min}\left( \mathbb{E}[\tilde{\tilde{\boldsymbol{x}}}_k \tilde{\tilde{\boldsymbol{x}}}_k^\top] \right) \ge \frac{c_1 \rho^2}{\log |\mathcal{K}|} \triangleq \lambda_{\mathcal{K}},$$
where $c_1 \in (0, 1)$ is some constant.

Lemma 2 provides a lower bound on the minimum eigenvalue of the expected outer product of the selected key term. Intuitively, this implies that under smoothed contexts, the selected key terms exhibit sufficient diversity in the feature space, ensuring that each query contributes meaningful information about the user's preferences.

Lemma 3. For CLiSK, with probability at least $1 - \delta$ for some $\delta \in (0, 1)$, if $t \ge T_0 \triangleq \frac{8(1 + \sqrt{d} R)^2}{b \lambda_{\mathcal{K}}} \log\left(\frac{d}{\delta}\right)$, we have
$$\lambda_{\min}\left( \sum_{s=1}^{t} \sum_{k \in \mathcal{K}_s} \tilde{\tilde{\boldsymbol{x}}}_k \tilde{\tilde{\boldsymbol{x}}}_k^\top \right) \ge \frac{\lambda_{\mathcal{K}} b t}{2}.$$

Lemma 3 establishes a lower bound on the minimum eigenvalue of the Gram matrix that grows linearly with time $t$. This guarantees that CLiSK accumulates enough statistical information to effectively estimate the user's preference vector through ridge regression. Following these results, we bound $\|\boldsymbol{x}_a\|_{\boldsymbol{M}_t^{-1}}$ in Lemma 4 and derive a high-probability regret upper bound for CLiSK in Theorem 1.

Lemma 4. For CLiSK, for any $a \in \mathcal{A}$, if $t \ge T_0 \triangleq \frac{8(1 + \sqrt{d} R)^2}{b \lambda_{\mathcal{K}}} \log\left(\frac{d}{\delta}\right)$, with probability at least $1 - \delta$ for some $\delta \in (0, 1)$, $\|\boldsymbol{x}_a\|_{\boldsymbol{M}_t^{-1}} \le \sqrt{\frac{2}{\lambda_{\mathcal{K}} b t}}$.
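Numerically, the confidence width $\alpha_t \|\boldsymbol{x}_a\|_{\boldsymbol{M}_t^{-1}}$ in Lemmas 1 and 4 is straightforward to evaluate. The following is our own illustrative sketch; the function names and signatures are ours, not the paper's code:

```python
import numpy as np

def alpha_t_clisk(t, d, delta, b, R, lam):
    """alpha_t from Lemma 1 (CLiSK), transcribed from the bound above."""
    inner = 1.0 + (t + (1.0 + np.sqrt(d) * R) * b * t) / (lam * d)
    return float(np.sqrt(2.0 * np.log(1.0 / delta) + d * np.log(inner)) + np.sqrt(lam))

def ucb_width(x, M, alpha_t):
    """Confidence width alpha_t * ||x||_{M^{-1}} around the estimated reward."""
    return alpha_t * float(np.sqrt(x @ np.linalg.solve(M, x)))
```

As expected from the formula, the width shrinks as the Gram matrix $\boldsymbol{M}_t$ grows, while $\alpha_t$ grows only logarithmically in $t$.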
Theorem 1 (Regret of CLiSK). With probability at least $1 - \delta$ for some $\delta \in (0, 1)$, the regret upper bound of CLiSK satisfies
$$\mathcal{R}(T) \le \frac{8(1 + \sqrt{d} R)^2 \log(|\mathcal{K}|)}{c_1 \rho^2 b} \log\left(\frac{d}{\delta}\right) + 4 \sqrt{\frac{2 T \log(|\mathcal{K}|)}{c_1 \rho^2 b}} \cdot \left( \sqrt{2 \log\left(\tfrac{1}{\delta}\right) + d \log\left(1 + \frac{T + (1 + \sqrt{d} R) b T}{\lambda d}\right)} + \sqrt{\lambda} \right) = O\left(\sqrt{d T \log(T)} + d\right),$$
where $R$ and $\rho^2$ are constants in Definition 1.

4.2 Regret Analysis of CLiME Algorithm

We begin with Lemma 5, which closely parallels Lemma 1.

Lemma 5. Let $\boldsymbol{\theta}_t$ be the estimated preference vector at round $t$ and $\boldsymbol{\theta}^*$ be the true preference vector. Under Assumptions 1, 2 and 3, for CLiME, at round $t$, for any arm $a \in \mathcal{A}$, with probability at least $1 - \delta$ ($\delta \in (0, 1)$), we have $\left| \boldsymbol{x}_a^\top \boldsymbol{\theta}_t - \boldsymbol{x}_a^\top \boldsymbol{\theta}^* \right| \le \alpha_t \|\boldsymbol{x}_a\|_{\boldsymbol{M}_t^{-1}}$, where $\alpha_t = \sqrt{2 \log(\tfrac{1}{\delta}) + d \log\left(1 + \frac{t + \alpha d t}{\lambda d c_0^2}\right)} + \sqrt{\lambda}$, $\alpha$ is an exploration control factor in Algorithm 2, and $c_0$ is a constant in Assumption 3.

Since conversations are initiated adaptively in CLiME, the number of conversations conducted up to each round $t$ is not deterministic. A key challenge in proving Lemma 5 is to bound this quantity. Then, we present Lemma 6, which bounds $\|\boldsymbol{x}_a\|_{\boldsymbol{M}_t^{-1}}$.

KDD '25, August 3–7, 2025, Toronto, ON, Canada Maoli Liu, Zhuohua Li, Xiangxiang Dai, and John C.S. Lui

Lemma 6. For CLiME, for any arm $a \in \mathcal{A}$, with probability at least $1 - \delta$ for some $\delta \in (0, 1)$, at round $t \ge 2P$, we have $\|\boldsymbol{x}_a\|_{\boldsymbol{M}_t^{-1}} \le \sqrt{\frac{2}{\alpha t}}$, where $P$ is a fixed integer.

The proof of Lemma 6 relies on establishing a lower bound on the minimum eigenvalue of $\boldsymbol{M}_t$, i.e., $\lambda_{\min}(\boldsymbol{M}_t) \ge \alpha t$, which involves a delicate analysis of
covariance matrix eigenvalues. The condition $t \ge 2P$ is introduced to generalize all three checking functions. Building on this, we derive the following theorem for CLiME.

Theorem 2 (Regret of CLiME). With probability at least $1 - \delta$ for some $\delta \in (0, 1)$, the regret upper bound of CLiME satisfies
$$\mathcal{R}(T) \le 4 \sqrt{\frac{2T}{\alpha}} \left( \sqrt{2 \log(\tfrac{1}{\delta}) + d \log\left(1 + \frac{T + \alpha d T}{\lambda d c_0^2}\right)} + \sqrt{\lambda} \right) + 2P = O\left(\sqrt{d T \log(T)}\right).$$

Remark 4. Note that Theorem 2 applies to all three uncertainty checking functions discussed in the CLiME algorithm, which underscores the generality of our methods.

CLiSK-ME combines the advantages of both smoothed key term contexts and adaptive conversation techniques, ensuring efficient exploration while adaptively adjusting conversation frequency based on uncertainty. As a result, we derive the following corollary.

Corollary 1. With probability at least $1 - \delta$ for some $\delta \in (0, 1)$, the regret upper bound of CLiSK-ME satisfies $\mathcal{R}(T) = O\left(\sqrt{d T \log(T)}\right)$.

4.3 Lower Bound for Conversational Bandits

We establish a regret lower bound for conversational bandits with finite and time-varying arm sets. Our result is novel because the well-known lower bound $\Omega(\sqrt{dT})$ by Chu et al. [6] does not consider conversational information and thus cannot be directly applied to our setting. Additionally, the existing lower bound for federated conversational bandits [19] is also inapplicable, as it assumes a fixed arm set. The detailed proof is given in Appendix A.11.

Theorem 3 (Regret lower bound). For any policy that chooses at most one key term per time step, there exists an instance of the conversational bandit problem such that the expected regret is at least $\Omega(\sqrt{dT})$. Furthermore, for any $T = 2^m$ with $m \in [d]$, the regret is at least $\Omega(\sqrt{d T \log(T)})$.

4.4 Discussion on Optimality

To the best of our knowledge, we are the first to propose algorithms for conversational contextual bandits that achieve the optimal regret bound of order $\tilde{O}(\sqrt{dT})$.
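The adaptive trigger behind these bounds (Section 3.2.2) is easy to sketch: diagonalize $\boldsymbol{M}_t$ and, for every eigenvalue below the exploration threshold, schedule $\lceil (\text{threshold} - \lambda_i) / c_0^2 \rceil$ conversations along the corresponding eigenvector. A minimal illustration in Python, with our own naming rather than the paper's code:

```python
import math
import numpy as np

def plan_conversations(M, threshold, c0):
    """Map eigenvector index -> number of conversations to schedule,
    for directions of M whose eigenvalue falls below `threshold`."""
    eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    return {i: math.ceil((threshold - lam) / c0 ** 2)
            for i, lam in enumerate(eigvals) if lam < threshold}
```

Each scheduled conversation would then query the key term maximizing $|\tilde{\boldsymbol{x}}_k^\top \boldsymbol{v}_i|$, and the schedule may be spread over multiple rounds, as noted in Section 3.2.2.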
We summarize the regret bounds of our proposed algorithms and related algorithms in Table 1 and discuss the theoretical improvements over existing methods. The regret upper bound of LinUCB [1] is $O(d\sqrt{T}\log T)$, which serves as a standard benchmark in contextual linear bandits. The first algorithm for conversational bandits, ConUCB [29], offers the same regret upper bound as LinUCB, indicating that it does not offer a substantial theoretical improvement over the non-conversational algorithms. Since then, improving regret through conversational interactions has remained an open problem in the field.

Table 1: Comparison of theoretical regret bounds.

Algorithm                          Conversational   Regret
LinUCB [1]                         ✗                O(d√T log T)
ConUCB [29], ConLinUCB-MCR [25]    ✓                O(d√T log T)
ConLinUCB-BS [25]                  ✓                at least O(d√(T log T))*
CLiSK (Ours, Theorem 1)            ✓                O(√(dT log T) + d)
CLiME (Ours, Theorem 2)            ✓                O(√(dT log T))
CLiSK-ME (Ours, Corollary 1)       ✓                O(√(dT log T))

*The original paper claims a regret of O(√(dT log T)), but its analysis is flawed.

Under the assumption that the key term set $\mathcal{K}$ spans $\mathbb{R}^d$, ConLinUCB-BS [25] achieves a regret upper bound of $O\left(\frac{1}{\sqrt{\lambda_{\mathcal{B}}}}\sqrt{d T \log T}\right)$, where $\lambda_{\mathcal{B}} \coloneqq \lambda_{\min}\left( \mathbb{E}_{k \sim \mathrm{unif}(\mathcal{B})}\left[ \tilde{\boldsymbol{x}}_k \tilde{\boldsymbol{x}}_k^\top \right] \right)$ and $\mathcal{B}$ is the barycentric spanner of $\mathcal{K}$. The authors assume $\lambda_{\mathcal{B}}$ is a constant, leading to a regret bound of $O(\sqrt{d T \log T})$. However, this assumption is incorrect, as $\lambda_{\mathcal{B}}$ depends on the dimension $d$ and is not a constant. Specifically, denoting $\boldsymbol{X} \coloneqq \mathbb{E}_{k \sim \mathrm{unif}(\mathcal{B})}\left[ \tilde{\boldsymbol{x}}_k \tilde{\boldsymbol{x}}_k^\top \right]$ and $\{\lambda_i\}_{i=1}^d$ as its eigenvalues, we use
the fact that $\|\tilde{\boldsymbol{x}}_k\| = 1$ and obtain $\mathrm{Tr}(\boldsymbol{X}) = \mathbb{E}_{k \sim \mathrm{unif}(\mathcal{B})}\left[ \mathrm{Tr}\left( \tilde{\boldsymbol{x}}_k \tilde{\boldsymbol{x}}_k^\top \right) \right] = 1$, thus $\lambda_{\mathcal{B}} \le \frac{\sum_{i=1}^d \lambda_i}{d} = \frac{\mathrm{Tr}(\boldsymbol{X})}{d} = \frac{1}{d}$. Consequently, by plugging this result back into the regret expression, the regret bound of ConLinUCB-BS cannot be better than $O(d\sqrt{T \log T})$. These previous attempts underscore the significance of our work. In contrast, with the smoothed key term context technique and with the adaptive conversation technique, our algorithms achieve better regret bounds of $O(\sqrt{d T \log T} + d)$ and $O(\sqrt{d T \log T})$, respectively. These improvements match the lower bound (Theorem 3) up to logarithmic factors in their dependence on the time horizon $T$.

5 Evaluation

In this section, we evaluate the performance of our algorithms on both synthetic and real-world datasets. All experiments were conducted on a machine equipped with a 3.70 GHz Intel Xeon E5-1630 v4 CPU and 32 GB RAM.

5.1 Experiment Setups

5.1.1 Datasets. Consistent with existing studies, we generate a synthetic dataset and use three real-world datasets: MovieLens-25M [12], Last.fm [4], and Yelp¹.

For the synthetic dataset, we set the dimension $d = 50$, the number of users $N = 200$, the number of arms $|\mathcal{A}| = 5{,}000$, and the number of key terms $|\mathcal{K}| = 1{,}000$. We generate it following Zhang et al. [29]. First, for each key term $k \in \mathcal{K}$, we sample a pseudo feature vector $\dot{\boldsymbol{x}}_k$ with each dimension drawn from a uniform distribution $U(-1, 1)$. For each arm $i \in \mathcal{A}$, we randomly select an integer $n_i \in \{1, 2, \dots, 5\}$ and uniformly sample a subset of key terms $\mathcal{K}_i \subset \mathcal{K}$ with $|\mathcal{K}_i| = n_i$. The weight is defined as $w_{i,k} = 1/n_i$ for each $k \in \mathcal{K}_i$. For each arm $i$, the feature vector $\boldsymbol{x}_i$ is drawn from a multivariate Gaussian $\mathcal{N}(\sum_{j \in \mathcal{K}_i} \dot{\boldsymbol{x}}_j / n_i, \boldsymbol{I})$. The feature vector for each key term $k$, denoted by $\tilde{\boldsymbol{x}}_k$, is computed as $\tilde{\boldsymbol{x}}_k = \sum_{i \in \mathcal{A}} \frac{w_{i,k}}{\sum_{j \in \mathcal{A}} w_{j,k}} \boldsymbol{x}_i$.
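The synthetic construction above (pseudo key-term vectors, arm features, and normalized key-term features) can be sketched end to end; the sizes here are shrunk for illustration and the variable names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_arms, n_keys = 5, 30, 10           # tiny sizes for illustration

# Pseudo feature vectors for key terms, coordinates ~ U(-1, 1)
x_dot = rng.uniform(-1.0, 1.0, size=(n_keys, d))

# Each arm i picks n_i in {1,...,5} key terms, each with weight 1/n_i
W = np.zeros((n_arms, n_keys))
for i in range(n_arms):
    n_i = int(rng.integers(1, 6))
    chosen = rng.choice(n_keys, size=n_i, replace=False)
    W[i, chosen] = 1.0 / n_i

# Arm features: x_i ~ N(mean of its key terms' pseudo vectors, I); the
# row-wise mean equals W @ x_dot because each row of W sums to one
X = W @ x_dot + rng.normal(size=(n_arms, d))

# Key-term features: x_tilde_k = sum_i (w_{i,k} / sum_j w_{j,k}) x_i
col = W.sum(axis=0)
col[col == 0] = 1.0                     # guard unused key terms
X_tilde = (W / col).T @ X
```

The normalization makes each key-term feature a convex combination of the features of the arms it is attached to, mirroring the formula above.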
Finally, each user's preference vector $\boldsymbol{\theta}_u \in \mathbb{R}^d$ is generated by sampling each dimension from $U(-1, 1)$ and normalizing it to unit length.

¹ https://www.yelp.com/dataset

For the real-world datasets, we regard movies/artists/businesses as arms. To exclude unrepresentative or insufficiently informative data (such as users who have not submitted any reviews, or movies with only a few reviews), we extract a subset of $|\mathcal{A}| = 5{,}000$ arms with the highest number of user-assigned ratings/tags, and a subset of $N = 200$ users who have assigned the most ratings/tags. Key terms are identified using the associated movie genres, business categories, or tag IDs in the MovieLens, Yelp, and Last.fm datasets, respectively. For example, each movie is associated with a list of genres, such as "action" or "comedy", and each business (e.g., a restaurant) is categorized by terms such as "Mexican" or "Burgers". Using the data extracted above, we create a feedback matrix $\boldsymbol{R}$ of size $N \times |\mathcal{A}|$, where each element $\boldsymbol{R}_{i,j}$ represents user $i$'s feedback on arm $j$. We assume that the user's feedback is binary. For the MovieLens and Yelp datasets, a user's feedback for a movie/business is 1 if the user's rating is higher than 3; otherwise, the feedback is 0. For the Last.fm dataset, a user's feedback for an artist is 1 if the user assigns a tag to the artist. Next, we generate the feature vectors
for arms $\boldsymbol{x}_i$ and the preference vectors for users $\boldsymbol{\theta}_u$. Following existing works, we decompose the feedback matrix $\boldsymbol{R}$ using truncated Singular Value Decomposition (SVD) as $\boldsymbol{R} \approx \boldsymbol{\Theta} \boldsymbol{S} \boldsymbol{A}^\top$, where $\boldsymbol{\Theta} \in \mathbb{R}^{N \times d}$ and $\boldsymbol{A} \in \mathbb{R}^{|\mathcal{A}| \times d}$ contain the top-$d$ left and right singular vectors, and $\boldsymbol{S} \in \mathbb{R}^{d \times d}$ is a diagonal matrix with the corresponding top-$d$ singular values. Then each $\boldsymbol{\theta}_u^\top$ corresponds to the $u$-th row of $\boldsymbol{\Theta}\boldsymbol{S}$ for all $u \in [N]$, and each $\boldsymbol{x}_i^\top$ corresponds to the $i$-th row of $\boldsymbol{A}$ for all $i \in \mathcal{A}$. The feature vectors for key terms are generated similarly to those in the synthetic dataset, by assigning equal weights to all key terms corresponding to each arm.

5.1.2 Baseline Algorithms. We select the following baseline algorithms from existing studies: (1) LinUCB [1]: the standard linear contextual bandit algorithm, which does not consider the conversational setting and only uses arm-level feedback. (2) Arm-Con [5]: an extension of LinUCB that initiates conversations directly from arm sets. (3) ConUCB [29]: the first algorithm proposed for conversational contextual bandits, which queries key terms when conversations are allowed. (4) ConLinUCB [25]: a family of three algorithms with different key term selection strategies. ConLinUCB-BS computes the barycentric spanner of key terms as an exploration basis. ConLinUCB-MCR selects key terms with the largest confidence radius. ConLinUCB-UCB chooses key terms with the largest upper confidence bounds. Since ConLinUCB-BS and ConLinUCB-MCR demonstrate superior performance, we focus our comparisons on these two variants.

5.2 Evaluation Results

5.2.1 Cumulative Regret. First, we compare our algorithms against all baseline algorithms in terms of cumulative regret over $T = 6{,}000$ rounds. In each round, we randomly select $|\mathcal{A}| = 200$ arms from each dataset. For the baseline algorithms, we adopt the conversation frequency function $b(t) = 5\lfloor \log(T) \rfloor$, as specified in their original papers.
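As a brief aside, the truncated-SVD feature construction described in Section 5.1.1 above can be sketched as follows; the tiny feedback matrix and variable names are illustrative:

```python
import numpy as np

def svd_features(R, d):
    """Decompose a feedback matrix R (N x |A|) as R ~ Theta S A^T and
    return user vectors (rows of Theta S) and arm vectors (rows of A)."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    Theta, S, A = U[:, :d], np.diag(s[:d]), Vt[:d, :].T
    return Theta @ S, A

# Toy binary feedback matrix: 2 users, 3 arms
R = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
users, arms = svd_features(R, d=2)
```

Since this toy matrix has rank 2, the rank-2 factors reconstruct it exactly; for the real $N \times |\mathcal{A}|$ matrices, the top-$d$ factors only approximate $\boldsymbol{R}$.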
We present the results for all three checking functions, "Continuous", "Fixed Interval", and "Exponential Phase", for both CLiME and CLiSK-ME. For the "Fixed Interval" function, UncertaintyChecking is triggered every 100 rounds, whereas for the "Exponential Phase" function it is triggered whenever $t$ is a power of 2. For CLiSK, both the perturbation level $\rho^2$ and the truncation limit $R$ are set to 1. The results are averaged over 20 trials, and the resulting confidence intervals are included in the figures.

Under the "Continuous Checking" function, as shown in Figure 2, our three algorithms consistently achieve the best performance (lowest regret), with an improvement of over 14.6% compared to the best baseline. Similar performance trends hold under the other two checking functions, as illustrated in Figures 3 and 4. These results confirm the validity of our theoretical advancements.

[Figure 2 panels: regret vs. round on (a) Synthetic, (b) MovieLens, (c) Yelp, (d) Last.fm]

Figure 2: Comparison of cumulative regret where CLiME and CLiSK-ME use the "Continuous Checking" function.

[Figure 3 panels: regret vs. round on (a) Synthetic, (b) MovieLens, (c) Yelp, (d) Last.fm]

Figure 3: Comparison of cumulative regret where CLiME and
CLiSK-ME use the "Fixed Interval" function.

[Figure 4 panels: regret vs. round on (a) Synthetic, (b) MovieLens, (c) Yelp, (d) Last.fm]

Figure 4: Comparison of cumulative regret where CLiME and CLiSK-ME use the "Exponential Phase" function.

5.2.2 Precision of Estimated Preference Vectors. To assess how accurately each algorithm learns the user's preferences over time, we measure the average distance between the estimated vector $\hat{\boldsymbol{\theta}}_t$ and the ground truth $\boldsymbol{\theta}^*$ for all algorithms over 1,000 rounds. We present the results for the "Continuous Checking" function of CLiME and CLiSK-ME, with results for the other functions provided in Appendix A.2. As shown in Figure 5, all algorithms exhibit a decreasing estimation error over time. However, our three algorithms consistently achieve the lowest estimation error on all datasets. This is because they leverage our novel conversational mechanism to gather more informative feedback, significantly accelerating the reduction of estimation error. As a result, our algorithms estimate the user's preference vector more quickly and accurately than the baseline methods.

5.2.3 Number of Conversations. Next, we evaluate the number of conversations initiated by CLiME. Since CLiSK and all baseline algorithms initiate conversations based on a deterministic function $b(t)$, their results are consistent across all datasets. Therefore, we plot the scenarios for $b(t) = 5\lfloor \log(t) \rfloor$ and $b(t) = \lfloor t/50 \rfloor$ as in prior studies. It is also important to note that although some existing studies employ a logarithmic $b(t)$ in their experiments, their theoretical results require a linear $b(t)$ to hold.
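For reference, the three uncertainty-checking schedules used in these experiments reduce to a single predicate; a sketch with illustrative names:

```python
def should_check(t, mode, period=100):
    """Whether to run UncertaintyChecking at round t (t >= 1)."""
    if mode == "continuous":
        return True                            # every round
    if mode == "fixed_interval":
        return t % period == 0                 # every P rounds (P = period)
    if mode == "exponential":
        return (t & (t - 1)) == 0              # rounds that are powers of two
    raise ValueError(f"unknown mode: {mode}")
```

The bitwise test `(t & (t - 1)) == 0` is a standard power-of-two check, matching the "triggered whenever t is a power of 2" rule stated above.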
In contrast to the baselines, our algorithm CLiME adaptively initiates conversations depending on the current uncertainty of user preferences, providing greater flexibility and enhancing the user experience. We plot the number of conversations initiated by CLiME with different uncertainty checking functions across the four datasets. As shown in Figure 6, the number of conversations increases only logarithmically with the number of rounds.

[Figure 5 panels: estimation error vs. round on (a) Synthetic, (b) MovieLens, (c) Yelp, (d) Last.fm]

Figure 5: Comparison of estimation precision where CLiME and CLiSK-ME use the "Continuous Checking" function.

[Figure 6 panels: (a) Continuous Checking (every round), (b) Fixed Interval Checking (every 100 rounds), (c) Exponential Phase Checking (when t is a power of 2)]

Figure 6: Number of conversations initiated by deterministic approaches and our adaptive approach CLiME with different uncertainty checking functions.

5.2.4 Running Time. To evaluate computational efficiency, we compare the running times of our algorithms with other conversational methods using the MovieLens dataset across $T = 6{,}000$ rounds. We separately report the total running times, as well as the times for picking arms and key terms. The results are averaged over 20 runs. As shown in Table 2, our
three algorithms show substantial improvements compared to ConUCB and exhibit performance comparable to the ConLinUCB family of algorithms. For CLiME and CLiSK-ME, while matrix operations and eigenvalue computation introduce slight overhead, the algorithms remain efficient, particularly with the interval and exponential checking strategies.

Table 2: Comparison of running times for conversational bandit algorithms on the MovieLens dataset.

Algorithm                 Key terms (s)   Arms (s)   Total (s)
CLiSK-ME (Continuous)     1.169           3.443      4.651
CLiSK-ME (Interval)       0.332           3.361      3.723
CLiSK-ME (Exponential)    0.352           3.344      3.724
CLiME (Continuous)        0.803           3.371      4.205
CLiME (Interval)          0.021           3.341      3.390
CLiME (Exponential)       0.014           3.334      3.375
CLiSK                     0.490           3.339      3.857
ConUCB                    0.011           8.362      8.403
ConLinUCB-UCB             0.009           3.354      3.392
ConLinUCB-MCR             0.007           3.337      3.371
ConLinUCB-BS              0.006           3.334      3.366

5.2.5 Ablation Study. We conduct an ablation study evaluating the effect of the truncation limit $R$. Specifically, we analyze how different values of $R$ affect algorithm performance by comparing the cumulative regrets at round 6,000 across all datasets, as shown in Figure 7. The results indicate that increasing $R$ from 0.1 to 3.1 leads to a decrease in regret, with performance stabilizing when $R > 2$. For the perturbation level $\rho^2$, we observe that varying it from 0.1 to 3 results in no significant change in regret. Therefore, we do not include a separate figure for this parameter.
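The smoothing noise behind this ablation is a component-wise truncated Gaussian: each coordinate is drawn from $\mathcal{N}(0, \rho^2)$ and constrained to $[-R, R]$. A sampling sketch via per-coordinate rejection (our own helper, not the paper's code):

```python
import numpy as np

def truncated_gaussian_noise(d, rho, R, rng):
    """Sample eps ~ N(0, rho^2 I_d) conditioned on |eps_j| <= R for all j,
    by resampling any coordinate that lands outside [-R, R]."""
    eps = rng.normal(0.0, rho, size=d)
    mask = np.abs(eps) > R
    while mask.any():
        eps[mask] = rng.normal(0.0, rho, size=mask.sum())
        mask = np.abs(eps) > R
    return eps
```

A smoothed key-term context is then the original context plus such a noise vector; small $R$ clips most of the perturbation, which is consistent with the higher regret observed at $R = 0.1$.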
[Figure 7 panels: cumulative regret vs. truncation limit R (0.1 to 3.1) on (a) Synthetic, (b) MovieLens, (c) Yelp, (d) Last.fm]

Figure 7: Effect of the truncation limit $R$.

6 Related Work

Our research is closely aligned with studies on conversational contextual bandits, particularly focusing on the problem of key term selection within this framework.

Contextual bandits serve as a fundamental framework for online sequential decision-making problems, covering applications like recommender systems [6, 15] and computer networking [10]. Contextual bandit algorithms aim to maximize the cumulative reward in the long run while making the trade-off between exploitation and exploration. Prominent algorithms include LinUCB [1] and Thompson Sampling (TS) [2].

To address the cold start problem, conversational recommender systems (CRSs) [5, 23, 30] are proposed to engage users in conversations to learn their preferences more effectively. Zhang et al. [29] extend standard contextual bandits to model conversational interactions and propose the pioneering ConUCB algorithm with a regret upper bound of $O(d\sqrt{T}\log T)$. Following the foundational work of Zhang et al. [29], a branch of research has advanced this field. Li et al. [16] design the first TS-type algorithm, ConTS. Wu et al. [26] propose a clustering-based algorithm to automatically generate key terms. Zuo et al. [32] propose Hier-UCB and Hier-LinUCB, leveraging the hierarchical structures between key terms and items. Xie et al. [27] introduce a comparison-based conversation framework and propose RelativeConUCB. Zhao et al. [31] integrate knowledge graphs into conversational bandits.
Li et al. [19] investigate federated conversational bandits. Dai et al. [7, 8] study conversational bandits with misspecified/corrupted models. To enhance learning efficiency, Dai et al. [9] consider
multi-agent LLM response identification with a fixed arm set. Wang et al. [25] and Yang et al. [28] investigate key term selection strategies and propose the ConLinUCB-BS and ConDuel algorithms, respectively. Both algorithms uniformly select key terms from the barycentric spanner of the key term set.

The smoothed analysis for contextual bandits has been widely studied recently [13, 17, 18, 20–22]. The smoothed setting bridges i.i.d. distributional and adversarial contexts. Kannan et al. [13] first introduce smoothed analysis for linear contextual bandits, showing that small perturbations can lead to sublinear regret with a greedy algorithm. Raghavan et al. [21] and Raghavan et al. [20] show that the greedy algorithm achieves the best possible Bayesian regret in this setting. Sivakumar et al. [22] extend the smoothed analysis to structured linear bandits. Building on these insights, we apply the smoothed key term contexts in conversational contextual bandits.

7 Conclusion

In this paper, we studied key term selection strategies for conversational contextual bandits and introduced three novel algorithms: CLiSK, CLiME, and CLiSK-ME. CLiSK leverages smoothed key term contexts to enhance exploration, while CLiME adaptively initiates conversations with key terms that minimize uncertainty in the feature space. CLiSK-ME integrates both techniques, further improving learning efficiency. We proved that all three algorithms achieve tighter regret bounds than prior studies. Extensive evaluations showed that our algorithms outperform other conversational bandit algorithms.

Acknowledgments

The work of John C.S. Lui was supported in part by the RGC GRF-14202923.

References

[1] Yasin Abbasi-Yadkori, Dávid Pál, and Csaba Szepesvári. 2011. Improved algorithms for linear stochastic bandits. Advances in Neural Information Processing Systems 24 (2011).
[2] Shipra Agrawal and Navin Goyal. 2012. Analysis of Thompson sampling for the multi-armed bandit problem. In Conference on Learning Theory. JMLR Workshop and Conference Proceedings, 39–1.
[3] J. Bretagnolle and C. Huber. 1978. Estimation des densités : Risque minimax. In Séminaire de Probabilités XII, C. Dellacherie, P. A. Meyer, and M. Weil (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 342–363.
[4] Ivan Cantador, Peter Brusilovsky, and Tsvi Kuflik. 2011. Second Workshop on Information Heterogeneity and Fusion in Recommender Systems (HetRec2011). In Proceedings of the Fifth ACM Conference on Recommender Systems (RecSys '11). 387–388.
[5] Konstantina Christakopoulou, Filip Radlinski, and Katja Hofmann. 2016. Towards conversational recommender systems. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 815–824.
[6] Wei Chu, Lihong Li, Lev Reyzin, and Robert Schapire. 2011. Contextual bandits with linear payoff functions. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics. JMLR Workshop and Conference Proceedings, 208–214.
[7] Xiangxiang Dai, Zhiyong Wang, Jize Xie, Xutong Liu, and John CS Lui. 2024. Conversational Recommendation with Online Learning and Clustering on Misspecified Users. IEEE Transactions on Knowledge and Data Engineering 36, 12 (2024), 7825–7838.
[8] Xiangxiang Dai, Zhiyong Wang, Jize Xie, Tong Yu, and John CS Lui. 2024. Online Learning and Detecting Corrupted Users for Conversational Recommendation Systems. IEEE Transactions on Knowledge
and Data Engineering 36, 12 (2024), 8939–8953.
[9] Xiangxiang Dai, Yuejin Xie, Maoli Liu, Xuchuang Wang, Zhuohua Li, Huanyu Wang, and John C. S. Lui. 2025. Multi-Agent Conversational Online Learning for Adaptive LLM Response Identification. arXiv:2501.01849 [cs.HC]
[10] Yi Gai, Bhaskar Krishnamachari, and Rahul Jain. 2012. Combinatorial network optimization with unknown variables: Multi-armed bandits with linear rewards and individual observations. IEEE/ACM Transactions on Networking 20, 5 (2012), 1466–1478.
[11] Chongming Gao, Wenqiang Lei, Xiangnan He, Maarten de Rijke, and Tat-Seng Chua. 2021. Advances and challenges in conversational recommender systems: A survey. AI Open 2 (2021), 100–126.
[12] F. Maxwell Harper and Joseph A. Konstan. 2015. The MovieLens Datasets: History and Context. ACM Trans. Interact. Intell. Syst. 5, 4, Article 19 (Dec. 2015), 19 pages.
[13] Sampath Kannan, Jamie H Morgenstern, Aaron Roth, Bo Waggoner, and Zhiwei Steven Wu. 2018. A smoothed analysis of the greedy algorithm for the linear contextual bandit problem. Advances in Neural Information Processing Systems 31 (2018).
[14] Tor Lattimore and Csaba Szepesvári. 2020. Bandit Algorithms. Cambridge University Press.
[15] Lihong Li, Wei Chu, John Langford, and Robert E Schapire. 2010. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th International Conference on World Wide Web. 661–670.
[16] Shijun Li, Wenqiang Lei, Qingyun Wu, Xiangnan He, Peng Jiang, and Tat-Seng Chua. 2021. Seamlessly unifying attributes and items: Conversational recommendation for cold-start users. ACM Transactions on Information Systems (TOIS) 39, 4 (2021), 1–29.
[17] Zhuohua Li, Maoli Liu, Xiangxiang Dai, and John C.S. Lui. 2025. Demystifying Online Clustering of Bandits: Enhanced Exploration Under Stochastic and Smoothed Adversarial Contexts. In The Thirteenth International Conference on Learning Representations.
[18] Zhuohua Li, Maoli Liu, Xiangxiang Dai, and John C.S. Lui. 2025. Towards Efficient Conversational Recommendations: Expected Value of Information Meets Bandit Learning. In Proceedings of the ACM on Web Conference 2025 (Sydney NSW, Australia) (WWW '25). 4226–4238.
[19] Zhuohua Li, Maoli Liu, and John C. S. Lui. 2024. FedConPE: Efficient Federated Conversational Bandits with Heterogeneous Clients. In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24. 4533–4541.
[20] Manish Raghavan, Aleksandrs Slivkins, Jennifer Wortman Vaughan, and Zhiwei Steven Wu. 2023. Greedy algorithm almost dominates in smoothed contextual bandits. SIAM J. Comput. 52, 2 (2023), 487–524.
[21] Manish Raghavan, Aleksandrs Slivkins, Jennifer Wortman Vaughan, and Zhiwei Steven Wu. 2018. The externalities of exploration and how data diversity helps exploitation. In Conference on Learning Theory. PMLR, 1724–1738.
[22] Vidyashankar Sivakumar, Steven Wu, and Arindam Banerjee. 2020. Structured linear contextual bandits: A sharp and geometric smoothed analysis. In International Conference on Machine Learning. PMLR, 9026–9035.
[23] Yueming Sun and Yi Zhang. 2018. Conversational recommender system. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval. 235–244.
[24] Joel A. Tropp. 2011. User-Friendly Tail Bounds for Sums of Random Matrices. Foundations of Computational Mathematics 12, 4 (Aug. 2011), 389–434.
[25] Zhiyong Wang, Xutong Liu, Shuai Li, and John CS Lui. 2023. Efficient explorative key-term selection strategies for conversational contextual bandits. In Proceedings
of the AAAI Conference on Artificial Intelligence, Vol. 37. 10288–10295.
[26] Junda Wu, Canzhe Zhao, Tong Yu, Jingyang Li, and Shuai Li. 2021. Clustering of conversational bandits for user preference learning and elicitation. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management. 2129–2139.
[27] Zhihui Xie, Tong Yu, Canzhe Zhao, and Shuai Li. 2021. Comparison-based conversational recommender system with relative bandit feedback. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. 1400–1409.
[28] Shuhua Yang, Hui Yuan, Xiaoying Zhang, Mengdi Wang, Hong Zhang, and Huazheng Wang. 2024. Conversational Dueling Bandits in Generalized Linear Models. arXiv preprint arXiv:2407.18488 (2024).
[29] Xiaoying Zhang, Hong Xie, Hang Li, and John CS Lui. 2020. Conversational contextual bandit: Algorithm and application. In Proceedings of The Web Conference 2020. 662–672.
[30] Yongfeng Zhang, Xu Chen, Qingyao Ai, Liu Yang, and W Bruce Croft. 2018. Towards conversational search and recommendation: System ask, user respond. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management. 177–186.
[31] Canzhe Zhao, Tong Yu, Zhihui Xie, and Shuai Li. 2022. Knowledge-aware conversational preference elicitation with bandit feedback. In Proceedings of the ACM Web Conference 2022. 483–492.
[32] Jinhang Zuo, Songwen Hu, Tong Yu, Shuai Li, Handong Zhao, and Carlee Joe-Wong. 2022. Hierarchical conversational preference elicitation with bandit feedback. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management. 2827–2836.

A Appendix

A.1 CLiSK-ME Algorithm

In this section, we present the details of the CLiSK-ME algorithm (Algorithm 3), which integrates the smoothed key term contexts and the adaptive conversation technique.
Algorithm 3: CLiSK-ME Input:A,K,𝑏(𝑡),𝜆,{𝛼𝑡}𝑡>0 Initialization: 𝑴1=𝜆𝑰𝑑,𝒃1=0𝑑 1for𝑡=1,...,𝑇 do 2 ifUncertaintyChecking( 𝑡)then 3 Diagonalize 𝑴𝑡=Í𝑑 𝑖=1𝜆𝒗𝑖𝒗𝑖𝒗𝑖⊤ 4 foreach𝜆𝒗𝑖<𝛼𝑡do 5 𝑛𝒗𝑖=⌈(𝛼𝑡−𝜆𝒗𝑖)/𝑐2 0⌉ 6 for𝑛𝒗𝑖>0do 7 Smooth the key term contexts to get {˜˜𝒙𝑘}𝑘∈K 8 𝑘=arg max𝑘∈K|˜˜𝒙⊤ 𝑘𝒗𝑖| 9 Receive the key term-level feedback ˜𝑟𝑘,𝑡 10 𝑴𝑡=𝑴𝑡+˜˜𝒙𝑘,𝑡˜˜𝒙⊤ 𝑘,𝑡 11 𝒃𝑡=𝒃𝑡+˜𝑟𝑘,𝑡˜˜𝒙𝑘,𝑡 12 𝑛𝒗𝑖=𝑛𝒗𝑖−1 13 𝜽𝑡=𝑴−1 𝑡𝒃𝑡 14 Select𝑎𝑡=arg max𝑎∈A𝑡𝒙⊤𝑎𝜽𝑡+𝛼𝑡∥𝒙𝑎∥𝑀−1 𝑡 15 Ask the user’s preference for arm 𝑎𝑡 16 Observe the reward 𝑟𝑎𝑡,𝑡 17 𝑴𝑡+1=𝑴𝑡+𝒙𝑎𝑡𝒙⊤𝑎𝑡 18 𝒃𝑡+1=𝒃𝑡+𝑟𝑎𝑡,𝑡𝒙𝑎𝑡 Leveraging the Power of Conversations: Optimal Key Term Selection in Conversational Contextual Bandits KDD ’25, August 3–7, 2025, Toronto, ON, Canada A.2 Supplementary Experiment Results We compare the estimation precision for the “Fixed Interval” and “Exponential Phase” uncertainty checking functions of CLiME in Figures 8 and 9. In the former, UncertaintyChecking is triggered every 100 rounds while in the latter it is triggered when 𝑡is a power of 2. Combined with the results presented in the evaluation results section, the experiments demonstrate that our algorithms consistently outperform the baselines. 100 300 500 700 900 1100 Round (a) Synthetic dataset0.000.050.100.150.20k^µt¡µ¤k2 100 300 500 700 900 1100 Round (b) MovieLens dataset0.00.10.20.3k^µt¡µ¤k2 100 300 500 700 900 1100 Round (c) Yelp dataset0.000.050.100.150.20k^µt¡µ¤k2 100 300 500 700 900 1100 Round (d) Last.fm dataset0.00.10.20.3k^µt¡µ¤k2 CLiSK-ME CLiMECLiSK ConUCBLinUCB Arm-ConConLinUCB-MCR ConLinUCB-BS Figure 8: Comparison of estimation precision where CLiME and CLiSK-ME use the “Fixed Interval” function. 100 300 500 700 900 1100 Round (a) Synthetic dataset0.000.050.100.150.20k^µt¡µ¤k2 100 300 500 700 900
Figure 9: Comparison of estimation precision $\|\hat{\theta}_t - \theta^*\|_2$ over rounds, where CLiME and CLiSK-ME use the "Exponential Phase" function. Panels: (a) Synthetic dataset, (b) MovieLens dataset, (c) Yelp dataset, (d) Last.fm dataset. Methods compared: CLiSK-ME, CLiME, CLiSK, ConUCB, LinUCB, Arm-Con, ConLinUCB-MCR, ConLinUCB-BS.

KDD '25, August 3-7, 2025, Toronto, ON, Canada. Maoli Liu, Zhuohua Li, Xiangxiang Dai, and John C.S. Lui

A.3 Proof of Lemma 1
Lemma 1. Under Assumptions 1 and 2, for CLiSK, for any round $t \in [T]$ and any arm $a \in \mathcal{A}$, with probability at least $1-\delta$ for some $\delta \in (0,1)$, we have
$$\big|x_a^\top \theta_t - x_a^\top \theta^*\big| \le \alpha_t \|x_a\|_{M_t^{-1}},$$
where
$$\alpha_t = \sqrt{2\log\Big(\frac{1}{\delta}\Big) + d\log\Big(1 + \frac{t + (1+\sqrt{d}R)\, b_t}{\lambda d}\Big)} + \sqrt{\lambda}.$$

Proof. For any arm $a \in \mathcal{A}$, from the definition of $M_t$ and $b_t$, and $\theta_t = M_t^{-1} b_t$, we have
$$x_a^\top(\theta_t - \theta^*) = x_a^\top\Big(M_t^{-1}\Big(\sum_{s=1}^{t-1} r_{a_s,s}\, x_{a_s} + \sum_{s=1}^{t} \sum_{k \in \mathcal{K}_s} \tilde{r}_{k,s}\, \tilde{\tilde{x}}_k\Big) - \theta^*\Big)$$
$$= x_a^\top\Big(M_t^{-1}\Big(\sum_{s=1}^{t-1} x_{a_s}\big(x_{a_s}^\top \theta^* + \eta_s\big) + \sum_{s=1}^{t} \sum_{k \in \mathcal{K}_s} \tilde{\tilde{x}}_k\big(\tilde{\tilde{x}}_k^\top \theta^* + \tilde{\eta}_s\big)\Big) - \theta^*\Big)$$
$$= x_a^\top\Big(M_t^{-1}\Big(\sum_{s=1}^{t-1} x_{a_s} x_{a_s}^\top + \sum_{s=1}^{t} \sum_{k \in \mathcal{K}_s} \tilde{\tilde{x}}_k \tilde{\tilde{x}}_k^\top + \lambda I_d - \lambda I_d\Big)\theta^* - \theta^*\Big) + x_a^\top M_t^{-1}\Big(\sum_{s=1}^{t-1} x_{a_s}\eta_s + \sum_{s=1}^{t} \sum_{k \in \mathcal{K}_s} \tilde{\tilde{x}}_k \tilde{\eta}_s\Big)$$
$$= -\lambda\, x_a^\top M_t^{-1}\theta^* + x_a^\top M_t^{-1}\Big(\sum_{s=1}^{t-1} x_{a_s}\eta_s + \sum_{s=1}^{t} \sum_{k \in \mathcal{K}_s} \tilde{\tilde{x}}_k \tilde{\eta}_s\Big).$$
By the Cauchy-Schwarz inequality, we have
$$\big|x_a^\top(\theta_t - \theta^*)\big| \le \lambda \|x_a\|_{M_t^{-1}} \|\theta^*\|_{M_t^{-1}} + \|x_a\|_{M_t^{-1}} \Big\|\sum_{s=1}^{t-1} x_{a_s}\eta_s + \sum_{s=1}^{t}\sum_{k\in\mathcal{K}_s} \tilde{\tilde{x}}_k\tilde{\eta}_s\Big\|_{M_t^{-1}}. \quad (1)$$
For the first term, by the fact that $\lambda_{\min}(M_t) \ge \lambda$ and by the property of the Rayleigh quotient, we have
$$\frac{\|\theta^*\|_{M_t^{-1}}^2}{\|\theta^*\|_2^2} = \frac{\theta^{*\top} M_t^{-1} \theta^*}{\theta^{*\top}\theta^*} \le \lambda_{\max}(M_t^{-1}) \le \frac{1}{\lambda_{\min}(M_t)} \le \frac{1}{\lambda}.$$
Therefore, we have
$$\lambda\|x_a\|_{M_t^{-1}}\|\theta^*\|_{M_t^{-1}} \le \lambda\|x_a\|_{M_t^{-1}}\|\theta^*\|_2 \le \lambda\|x_a\|_{M_t^{-1}}\sqrt{\frac{1}{\lambda}} = \sqrt{\lambda}\,\|x_a\|_{M_t^{-1}}. \quad (2)$$
For the second term, from Theorem 1 in Abbasi-Yadkori et al. [1], for any $\delta\in(0,1)$, with probability at least $1-\delta$, for all $t\ge 1$ we have
$$\Big\|\sum_{s=1}^{t-1} x_{a_s}\eta_s + \sum_{s=1}^{t}\sum_{k\in\mathcal{K}_s} \tilde{\tilde{x}}_k\tilde{\eta}_s\Big\|_{M_t^{-1}} \le \sqrt{2\log\Big(\frac{\det(M_t)^{\frac{1}{2}}\det(\lambda I_d)^{-\frac{1}{2}}}{\delta}\Big)}. \quad (3)$$
By adopting the determinant-trace inequality (Lemma 12), we have
$$\mathrm{Tr}(M_t) \le d\lambda + \sum_{s=1}^{t-1}\mathrm{Tr}(x_{a_s}x_{a_s}^\top) + \sum_{s=1}^{t}\sum_{k\in\mathcal{K}_s}\mathrm{Tr}(\tilde{\tilde{x}}_k\tilde{\tilde{x}}_k^\top) \le d\lambda + t + (1+\sqrt{d}R)\, b_t,$$
which is obtained because there are at most $b_t$ key terms selected by round $t$ and $\|\tilde{\tilde{x}}_k\| \le 1+\sqrt{d}R$ for all $k\in\mathcal{K}$, and therefore
$$\det(M_t) \le \Big(\frac{\mathrm{Tr}(M_t)}{d}\Big)^d \le \Big(\frac{d\lambda + t + (1+\sqrt{d}R)\, b_t}{d}\Big)^d, \quad (4)$$
where $\mathrm{Tr}(X)$ denotes the trace of matrix $X$. By substituting Equation (4) into Equation (3), we have
$$\Big\|\sum_{s=1}^{t-1} x_{a_s}\eta_s + \sum_{s=1}^{t}\sum_{k\in\mathcal{K}_s} \tilde{\tilde{x}}_k\tilde{\eta}_s\Big\|_{M_t^{-1}} \le \sqrt{2\log\Big(\frac{1}{\delta}\Big) + \log\frac{\det(M_t)}{\det(\lambda I_d)}} \le \sqrt{2\log\Big(\frac{1}{\delta}\Big) + d\log\Big(1 + \frac{t + (1+\sqrt{d}R)\, b_t}{\lambda d}\Big)}. \quad (5)$$
Plugging Equation (2) and Equation (5) into Equation (1), we have
$$\big|x_a^\top(\theta_t - \theta^*)\big| \le \|x_a\|_{M_t^{-1}}\Bigg(\sqrt{\lambda} + \sqrt{2\log\Big(\frac{1}{\delta}\Big) + d\log\Big(1 + \frac{t + (1+\sqrt{d}R)\, b_t}{\lambda d}\Big)}\Bigg), \quad (6)$$
which completes the proof. □

A.4 Proof of Lemma 2
Lemma 2. For any round $t \in [T]$, with the smoothed key-term contexts in Definition 1, CLiSK has the following lower bound on the minimum eigenvalue of the matrix $\mathbb{E}[\tilde{\tilde{x}}_k \tilde{\tilde{x}}_k^\top]$ for any $k \in \mathcal{K}_t$, i.e.,
$$\lambda_{\min}\big(\mathbb{E}[\tilde{\tilde{x}}_k \tilde{\tilde{x}}_k^\top]\big) \ge \frac{c_1 \rho^2}{\log|\mathcal{K}|} \triangleq \lambda_{\mathcal{K}},$$
where $c_1 \in (0,1)$ is some constant.

Proof. Fix a time $t$, and denote the key term selected at this time by $k_t$. Although multiple key terms may be selected at each time step, they all satisfy the properties of this lemma; therefore, we do not distinguish between them and use only a single subscript $t$. Let $Q$ be a unitary matrix that rotates the estimated preference vector $\theta_t$ to
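The confidence width $\alpha_t$ of Lemma 1 is a closed-form expression, so it is cheap to evaluate inside the algorithm. A small sketch (function and variable names are ours, not from the paper):

```python
import math

def clisk_alpha(t, d, R, b_t, lam, delta):
    """Confidence width alpha_t from Lemma 1:
    sqrt(2 log(1/delta) + d log(1 + (t + (1 + sqrt(d) R) b_t) / (lam d))) + sqrt(lam)."""
    log_det_term = d * math.log(1 + (t + (1 + math.sqrt(d) * R) * b_t) / (lam * d))
    return math.sqrt(2 * math.log(1 / delta) + log_det_term) + math.sqrt(lam)

w = clisk_alpha(t=1000, d=10, R=1.0, b_t=50, lam=1.0, delta=0.05)
print(round(w, 3))
```

As expected from the formula, the width grows only logarithmically in $t$, which is what makes the final $O(\sqrt{dT\log T})$ regret bound possible.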
align it with the $x$-axis, maintaining its length but zeroing out all components except the first, i.e., $Q\theta_t = (\|\theta_t\|, 0, 0, \dots, 0)$. Note that such a $Q$ always exists because it just rotates the space. According to CLiSK's key-term selection strategy $\tilde{\tilde{x}}_{k_t} = \arg\max_{k\in\mathcal{K}} \theta_t^\top \tilde{\tilde{x}}_k$, we have
$$\lambda_{\min}\big(\mathbb{E}[\tilde{\tilde{x}}_{k_t}\tilde{\tilde{x}}_{k_t}^\top]\big) = \lambda_{\min}\Big(\mathbb{E}\Big[x x^\top \,\Big|\, x = \arg\max_{k\in\mathcal{K}} \theta_t^\top \tilde{\tilde{x}}_k\Big]\Big)$$
$$= \min_{w:\|w\|=1} w^\top \mathbb{E}\Big[x x^\top \,\Big|\, x = \arg\max_{k\in\mathcal{K}} \theta_t^\top \tilde{\tilde{x}}_k\Big] w = \min_{w:\|w\|=1} \mathbb{E}\Big[(w^\top x)^2 \,\Big|\, x = \arg\max_{k\in\mathcal{K}} \theta_t^\top \tilde{\tilde{x}}_k\Big]$$
$$\ge \min_{w:\|w\|=1} \mathrm{Var}\Big[w^\top x \,\Big|\, x = \arg\max_{k\in\mathcal{K}} \theta_t^\top \tilde{\tilde{x}}_k\Big] = \min_{w:\|w\|=1} \mathrm{Var}\Big[(Qw)^\top Q x \,\Big|\, x = \arg\max_{k\in\mathcal{K}} (Q\theta_t)^\top Q\tilde{\tilde{x}}_k\Big] \quad (7)$$
$$= \min_{w:\|w\|=1} \mathrm{Var}\Big[w^\top Q x \,\Big|\, x = \arg\max_{k\in\mathcal{K}} \|\theta_t\|\,(Q\tilde{\tilde{x}}_k)_1\Big] \quad (8)$$
$$= \min_{w:\|w\|=1} \mathrm{Var}\Big[w^\top Q\varepsilon \,\Big|\, \varepsilon = \arg\max_{\varepsilon_k:\, k\in\mathcal{K}} (Q\tilde{x}_k + Q\varepsilon_k)_1\Big] \quad (9)$$
$$= \min_{w:\|w\|=1} \mathrm{Var}\Big[w^\top \varepsilon \,\Big|\, \varepsilon = \arg\max_{\varepsilon_k:\, k\in\mathcal{K}} (Q\tilde{x}_k + \varepsilon_k)_1\Big], \quad (10)$$
where Equation (7) uses the property of unitary matrices $Q^\top Q = I_d$; Equation (8) applies the matrix $Q$ so that only the first component is non-zero, and uses the fact that minimizing over $Qw$ is equivalent to minimizing over $w$; Equation (9) follows because each smoothed key term satisfies $\tilde{\tilde{x}}_k = \tilde{x}_k + \varepsilon_k$ by definition, and adding a constant to a random variable does not change its variance; Equation (10) is due to the rotation invariance of symmetrically truncated Gaussian distributions.

Since $\varepsilon_k \sim \mathcal{N}(0, \rho^2 I_d)$ conditioned on $|(\varepsilon_k)_j| \le R$ for all $j\in[d]$, by the property of (truncated) multivariate Gaussian distributions, the components of $\varepsilon_k$ can be equivalently regarded as $d$ independent samples from a (truncated) univariate Gaussian, i.e., $(\varepsilon_k)_j \sim \mathcal{N}(0,\rho^2)$ conditioned on $|(\varepsilon_k)_j| \le R$, for all $j\in[d]$. Therefore,
$$\mathrm{Var}\big[w^\top \varepsilon\big] = \mathrm{Var}\Big[\sum_{i=1}^d w_i \varepsilon_i\Big] = \sum_{i=1}^d w_i^2\, \mathrm{Var}[\varepsilon_i],$$
where the exchange of variance and summation is due to the independence of the $\varepsilon_i$.

Therefore, we can write
$$\min_{w:\|w\|=1}\mathrm{Var}\Big[w^\top\varepsilon \,\Big|\, \varepsilon = \arg\max_{\varepsilon_k:\, k\in\mathcal{K}}\big((\varepsilon_k)_1 + (Q\tilde{x}_k)_1\big)\Big] = \min_{w:\|w\|=1}\sum_{j=1}^d w_j^2\, \mathrm{Var}\Big[(\varepsilon)_j \,\Big|\, \varepsilon = \arg\max_{\varepsilon_k:\, k\in\mathcal{K}}\big((\varepsilon_k)_1 + (Q\tilde{x}_k)_1\big)\Big]$$
$$= \min_{w:\|w\|=1}\Big( w_1^2\,\mathrm{Var}\big[(\varepsilon)_1 \,\big|\, \cdot\big] + \sum_{j=2}^d w_j^2\, \mathrm{Var}\big[(\varepsilon)_j \,\big|\, \cdot\big]\Big)$$
$$= \min_{w:\|w\|=1}\Big( w_1^2\,\mathrm{Var}\big[(\varepsilon)_1 \,\big|\, \cdot\big] + \sum_{j=2}^d w_j^2\, \mathrm{Var}\big[(\varepsilon)_j\big]\Big)$$
$$= \min_{w:\|w\|=1}\Big( w_1^2\,\mathrm{Var}\big[(\varepsilon)_1 \,\big|\, \cdot\big] + (1 - w_1^2)\,\rho^2\Big) = \min\Big\{\mathrm{Var}\big[(\varepsilon)_1 \,\big|\, \cdot\big],\ \rho^2\Big\} \ge \frac{c_1\rho^2}{\log|\mathcal{K}|},$$
where in the last inequality we use Lemma 15 and Lemma 14 in Sivakumar et al. [22] and get
$$\mathrm{Var}\Big[(\varepsilon)_1 \,\Big|\, \varepsilon = \arg\max_{\varepsilon_k:\, k\in\mathcal{K}}\big((\varepsilon_k)_1 + (Q\tilde{x}_k)_1\big)\Big] \ge \mathrm{Var}\Big[(\varepsilon)_1 \,\Big|\, \varepsilon = \arg\max_{\varepsilon_k:\, k\in\mathcal{K}}(\varepsilon_k)_1\Big] \ge \frac{c_1\rho^2}{\log|\mathcal{K}|}. \qquad \Box$$

A.5 Proof of Lemma 3
Lemma 3. For CLiSK, with probability at least $1-\delta$ for some $\delta\in(0,1)$, if $t \ge T_0 \triangleq \frac{8(1+\sqrt{d}R)^2}{b\lambda_{\mathcal{K}}}\log\big(\frac{d}{\delta}\big)$, we have
$$\lambda_{\min}\Big(\sum_{s=1}^{t}\sum_{k\in\mathcal{K}_s}\tilde{\tilde{x}}_k\tilde{\tilde{x}}_k^\top\Big) \ge \frac{\lambda_{\mathcal{K}}\, b t}{2}.$$

Proof. To apply the matrix Chernoff bound (Lemma 11), we first verify the two required conditions for the self-adjoint matrices $\tilde{\tilde{x}}_k\tilde{\tilde{x}}_k^\top$ for any $k\in\mathcal{K}_s$ and $s\in[t]$. First, $\tilde{\tilde{x}}_k\tilde{\tilde{x}}_k^\top$ is obviously positive semi-definite. Second, by the Courant-Fischer theorem,
$$\lambda_{\max}\big(\tilde{\tilde{x}}_k\tilde{\tilde{x}}_k^\top\big) = \max_{w:\|w\|=1} w^\top\tilde{\tilde{x}}_k\tilde{\tilde{x}}_k^\top w = \max_{w:\|w\|=1}\big(w^\top\tilde{\tilde{x}}_k\big)^2 \le \max_{w:\|w\|=1}\|w\|^2\|\tilde{\tilde{x}}_k\|^2 \le \big(1+\sqrt{d}R\big)^2.$$
Next, by Lemma 2 and the super-additivity of the minimum eigenvalue (due to Weyl's inequality), we have
$$\mu_{\min} = \lambda_{\min}\Big(\sum_{s=1}^{t}\sum_{k\in\mathcal{K}_s}\mathbb{E}\big[\tilde{\tilde{x}}_k\tilde{\tilde{x}}_k^\top\big]\Big) \ge \sum_{s=1}^{t}\sum_{k\in\mathcal{K}_s}\lambda_{\min}\big(\mathbb{E}\big[\tilde{\tilde{x}}_k\tilde{\tilde{x}}_k^\top\big]\big) \ge \lambda_{\mathcal{K}}\, b t,$$
where the last inequality is because at least $bt$ key terms have been selected by round $t$, so the summation has at least $bt$ terms. So by Lemma 11, for any $\varepsilon\in(0,1)$,
$$\Pr\Big[\lambda_{\min}\Big(\sum_{s=1}^{t}\sum_{k\in\mathcal{K}_s}\tilde{\tilde{x}}_k\tilde{\tilde{x}}_k^\top\Big) \le (1-\varepsilon)\lambda_{\mathcal{K}}bt\Big] \le \Pr\Big[\lambda_{\min}\Big(\sum_{s=1}^{t}\sum_{k\in\mathcal{K}_s}\tilde{\tilde{x}}_k\tilde{\tilde{x}}_k^\top\Big) \le (1-\varepsilon)\mu_{\min}\Big] \le d\Big[\frac{e^{-\varepsilon}}{(1-\varepsilon)^{1-\varepsilon}}\Big]^{\mu_{\min}/(1+\sqrt{d}R)^2} \le d\Big[\frac{e^{-\varepsilon}}{(1-\varepsilon)^{1-\varepsilon}}\Big]^{\frac{\lambda_{\mathcal{K}}bt}{(1+\sqrt{d}R)^2}},$$
where the last inequality
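The heart of Lemma 2 is that even after selecting the key term whose (smoothed) context maximizes an inner product, the winning noise coordinate still has variance on the order of $\rho^2/\log|\mathcal{K}|$. A quick Monte-Carlo check of this quantity (an informal sketch; the setup simplifies the lemma by dropping the deterministic offsets $(Q\tilde{x}_k)_1$, and the function name is ours):

```python
import math
import random

def selected_noise_variance(K, rho, trials=5000, seed=1):
    """Monte-Carlo estimate of Var[(eps)_1 | eps = argmax_k (eps_k)_1]:
    the variance of the winning first coordinate when the key term with
    the largest first noise coordinate is selected (cf. Lemma 2)."""
    rng = random.Random(seed)
    samples = [max(rng.gauss(0, rho) for _ in range(K)) for _ in range(trials)]
    mean = sum(samples) / trials
    return sum((s - mean) ** 2 for s in samples) / trials

K, rho = 50, 1.0
v = selected_noise_variance(K, rho)
print(v, rho ** 2 / math.log(K))   # selected variance vs. rho^2 / log|K|
```

The estimated variance is well below $\rho^2$ (selection concentrates the winner) but stays within a constant factor of $\rho^2/\log|\mathcal{K}|$, consistent with the lemma's $c_1 \in (0,1)$.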
is because $e^{-x}$ is decreasing. Choosing $\varepsilon = \frac{1}{2}$, we get
$$\Pr\Big[\lambda_{\min}\Big(\sum_{s=1}^{t}\sum_{k\in\mathcal{K}_s}\tilde{\tilde{x}}_k\tilde{\tilde{x}}_k^\top\Big) \le \frac{\lambda_{\mathcal{K}}bt}{2}\Big] \le d\big(\sqrt{2}\,e^{-\frac{1}{2}}\big)^{\frac{\lambda_{\mathcal{K}}bt}{(1+\sqrt{d}R)^2}}.$$
Letting the RHS be $\delta$, we get
$$t = \frac{2(1+\sqrt{d}R)^2\log(\frac{d}{\delta})}{\lambda_{\mathcal{K}}\, b\,(1-\log 2)} \le \frac{8(1+\sqrt{d}R)^2}{\lambda_{\mathcal{K}}\, b}\log\Big(\frac{d}{\delta}\Big).$$
Therefore, $\lambda_{\min}\big(\sum_{s=1}^{t}\sum_{k\in\mathcal{K}_s}\tilde{\tilde{x}}_k\tilde{\tilde{x}}_k^\top\big) \ge \frac{\lambda_{\mathcal{K}}bt}{2}$ holds with probability at least $1-\delta$ when $t \ge \frac{8(1+\sqrt{d}R)^2}{\lambda_{\mathcal{K}}\, b}\log\big(\frac{d}{\delta}\big)$. □

A.6 Proof of Lemma 4
Lemma 4. For CLiSK, for any $a\in\mathcal{A}$, if $t \ge T_0 \triangleq \frac{8(1+\sqrt{d}R)^2}{b\lambda_{\mathcal{K}}}\log\big(\frac{d}{\delta}\big)$, then with probability at least $1-\delta$ for some $\delta\in(0,1)$, $\|x_a\|_{M_t^{-1}} \le \sqrt{\frac{2}{\lambda_{\mathcal{K}}bt}}$.

Proof.
$$\|x_a\|_{M_t^{-1}} = \sqrt{x_a^\top M_t^{-1} x_a} \le \sqrt{\lambda_{\max}\big(M_t^{-1}\big)\, x_a^\top x_a} = \sqrt{\frac{1}{\lambda_{\min}(M_t)}}, \quad (11)$$
where the inequality is due to the property of the Rayleigh quotient, and the last equality is due to the fact that $x_a^\top x_a = 1$. By the definition of $M_t$, we have
$$\lambda_{\min}(M_t) = \lambda_{\min}\Big(\sum_{s=1}^{t-1}x_{a_s}x_{a_s}^\top + \sum_{s=1}^{t}\sum_{k\in\mathcal{K}_s}\tilde{\tilde{x}}_k\tilde{\tilde{x}}_k^\top + \lambda I_d\Big) \ge \lambda_{\min}\Big(\sum_{s=1}^{t}\sum_{k\in\mathcal{K}_s}\tilde{\tilde{x}}_k\tilde{\tilde{x}}_k^\top\Big) \ge \frac{\lambda_{\mathcal{K}}bt}{2}, \quad (12)$$
where the first inequality follows from the property of the Loewner order that $A \succeq B$ implies $\lambda_{\min}(A) \ge \lambda_{\min}(B)$, and the last inequality follows from Lemma 3 conditioned on $t \ge T_0$. Therefore, by plugging Equation (12) into Equation (11), we have $\|x_a\|_{M_t^{-1}} \le \sqrt{\frac{2}{\lambda_{\mathcal{K}}bt}}$. □

A.7 Proof of Lemma 5
Lemma 5. Let $\theta_t$ be the estimated preference vector at round $t$ and $\theta^*$ the true preference vector. Under Assumptions 1, 2 and 3, for CLiME, at round $t$, for any arm $a\in\mathcal{A}$, with probability at least $1-\delta$ ($\delta\in(0,1)$), we have
$$\big|x_a^\top\theta_t - x_a^\top\theta^*\big| \le \alpha_t\|x_a\|_{M_t^{-1}},$$
where $\alpha_t = \sqrt{2\log(\frac{1}{\delta}) + d\log\big(1 + \frac{t + \alpha d t}{\lambda d c_0^2}\big)} + \sqrt{\lambda}$, $\alpha$ is an exploration control factor in Algorithm 2, and $c_0$ is a constant in Assumption 3.

Proof. The proof of Lemma 5 is similar to that of Lemma 1; the only difference is the trace and determinant of the matrix $M_t$. We first show that by round $t$, at most $\frac{\alpha d t}{c_0^2}$ key terms have been selected since the beginning of the algorithm, for all three uncertainty-checking functions.

Consider the case where CLiME uses the "Continuous Checking" function, i.e., the agent checks the eigenvalues of the matrix $M_t$ at each round. We first denote the covariance matrix before selecting key terms at round $t$ by $M'_t$, i.e., $M_t = M'_t + \sum_{k\in\mathcal{K}_t}\tilde{x}_k\tilde{x}_k^\top$. For $M'_t$, denote its eigenvectors by $\{v_i\}_{i=1}^d$ and corresponding eigenvalues by $\{\lambda_{v_i}\}_{i=1}^d$. If some key term $k$ is selected at round $t$, then there must exist an eigenvector $v_i$ such that $\lambda_{v_i} < \alpha t$, and the corresponding key-term context $\tilde{x}_k$ is close to $v_i$, i.e., $\tilde{x}_k^\top v_i \ge c_0$. We can write the vector $\tilde{x}_k = \sum_{i=1}^d \gamma_i v_i$ for some coefficients $\{\gamma_i\}_{i=1}^d$. For $j\in[d]$, denote $z_j = \sum_{i=1, i\ne j}^d \gamma_i v_i$. For the selected direction $j$, we have $\tilde{x}_k^\top v_j = \sum_{i=1}^d \gamma_i v_i^\top v_j = \gamma_j \ge c_0$ and
$$\tilde{x}_k\tilde{x}_k^\top = (\gamma_j v_j + z_j)(\gamma_j v_j + z_j)^\top \succeq \gamma_j^2\, v_j v_j^\top + z_j z_j^\top,$$
and then we have the following:
$$M'_t + \sum_{k\in\mathcal{K}_t}\tilde{x}_k\tilde{x}_k^\top \succeq \sum_{i=1}^d \lambda_{v_i} v_i v_i^\top + \sum_{i\in[d]:\,\lambda_{v_i}\le\alpha t}\Big\lceil\frac{\alpha t - \lambda_{v_i}}{c_0^2}\Big\rceil\big(\gamma_i^2\, v_i v_i^\top + z_i z_i^\top\big)$$
$$\succeq \sum_{i=1}^d \lambda_{v_i} v_i v_i^\top + \sum_{i\in[d]:\,\lambda_{v_i}\le\alpha t}\big(\alpha t - \lambda_{v_i}\big)\, v_i v_i^\top$$
$$= \sum_{i\in[d]:\,\lambda_{v_i}<\alpha t}\big(\alpha t - \lambda_{v_i} + \lambda_{v_i}\big)\, v_i v_i^\top + \sum_{i\in[d]:\,\lambda_{v_i}\ge\alpha t}\lambda_{v_i}\, v_i v_i^\top \succeq \sum_{i=1}^d \alpha t\, v_i v_i^\top. \quad (13)$$
Following from Equation (13), we have
$$\lambda_{\min}(M_t) \ge \alpha t. \quad (14)$$
Denote the number of key terms selected at round $t$ by $K_t$. We have $K_t = \sum_{i=1}^d \big\lceil\frac{\alpha t - \lambda_{v_i}}{c_0^2}\big\rceil$. Since $\lambda_{v_i} \ge \alpha(t-1)$ for all $i\in[d]$ according to Equation (14), we have $K_t \le \frac{\alpha d}{c_0^2}$, and
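Lemma 3's conclusion, that the minimum eigenvalue of the accumulated Gram matrix grows linearly in the number of selected key terms, can be observed numerically. A small sketch in $\mathbb{R}^2$, where the smallest eigenvalue has a closed form (random isotropic directions stand in for the smoothed key-term contexts; names are ours):

```python
import math
import random

def lambda_min_2x2(a, b, c):
    """Smallest eigenvalue of the symmetric matrix [[a, b], [b, c]]."""
    return 0.5 * ((a + c) - math.sqrt((a - c) ** 2 + 4 * b ** 2))

def gram_lambda_min(n, seed=2):
    """lambda_min of sum_{s<=n} x_s x_s^T for n random unit vectors in R^2.
    Since E[x x^T] = I/2 here, Lemma 3-style reasoning predicts growth
    roughly like n/2."""
    rng = random.Random(seed)
    a = b = c = 0.0
    for _ in range(n):
        phi = rng.uniform(0, 2 * math.pi)
        x, y = math.cos(phi), math.sin(phi)
        a += x * x
        b += x * y
        c += y * y
    return lambda_min_2x2(a, b, c)

g1, g2 = gram_lambda_min(100), gram_lambda_min(1000)
print(g1, g2)   # both roughly half the number of vectors
```

The linear growth is exactly what Lemma 4 converts into a $1/\sqrt{t}$ decay of $\|x_a\|_{M_t^{-1}}$.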
then $\sum_{s=1}^{t} K_s \le \frac{\alpha d t}{c_0^2}$.

For the "Fixed Interval Checking" function, at each uncertainty-checking point $t_j = jP$ where $j\in\{1,2,\dots,\lfloor\frac{T}{P}\rfloor\}$, we have $\lambda_{\min}(M_{t_j}) \ge \alpha t_j$. For the $j$-th check, there are $\sum_{i=1}^d \frac{\alpha t_j - \lambda_{v_i}}{c_0^2} \le \sum_{i=1}^d \frac{\alpha t_j - \alpha t_{j-1}}{c_0^2} \le \frac{\alpha d P}{c_0^2}$ conversations to be launched. Thus, by round $t$, the total number of conversations satisfies $\sum_{j=1}^{\lfloor t/P\rfloor}\frac{\alpha d P}{c_0^2} \le \frac{\alpha d t}{c_0^2}$.

For the "Exponential Phase Checking" function, by round $t$ there are $\lfloor\log_2(t)\rfloor$ uncertainty-checking points. For the $j$-th check, there are $\sum_{i=1}^d \frac{\alpha t_j - \lambda_{v_i}}{c_0^2} \le \sum_{i=1}^d \frac{\alpha t_j - \alpha t_{j-1}}{c_0^2} \le \frac{\alpha d\, 2^{j-1}}{c_0^2}$ conversations to be launched. By round $t$, the total number of conversations satisfies $\sum_{j=1}^{\lfloor\log_2(t)\rfloor}\frac{\alpha d\, 2^{j-1}}{c_0^2} \le \frac{\alpha d t}{c_0^2}$.

Therefore, we have
$$\mathrm{Tr}(M_t) \le d\lambda + \sum_{s=1}^{t-1}\mathrm{Tr}\big(x_{a_s}x_{a_s}^\top\big) + \sum_{s=1}^{t}\sum_{k\in\mathcal{K}_s}\mathrm{Tr}\big(\tilde{x}_k\tilde{x}_k^\top\big) \le d\lambda + t + \frac{\alpha d t}{c_0^2},$$
and
$$\det(M_t) \le \Big(\frac{\mathrm{Tr}(M_t)}{d}\Big)^d \le \Big(\frac{d\lambda + t + \alpha d t/c_0^2}{d}\Big)^d \le \Big(\lambda + \frac{t + \alpha d t}{d c_0^2}\Big)^d,$$
where the last inequality is obtained by the fact that $c_0 < 1$. Following the same steps as in the proof of Lemma 1, we can obtain
$$\big|x_a^\top(\theta_t - \theta^*)\big| \le \|x_a\|_{M_t^{-1}}\Bigg(\sqrt{\lambda} + \sqrt{2\log\Big(\frac{1}{\delta}\Big) + d\log\Big(1 + \frac{t + \alpha d t}{\lambda d c_0^2}\Big)}\Bigg),$$
which concludes the proof. □

A.8 Proof of Lemma 6
Lemma 6. For CLiME, for any arm $a\in\mathcal{A}$, with probability at least $1-\delta$ for some $\delta\in(0,1)$, at round $t \ge 2P$ we have $\|x_a\|_{M_t^{-1}} \le \sqrt{\frac{2}{\alpha t}}$, where $P$ is a fixed integer.

Proof. We first consider the case where CLiME uses the "Continuous Checking" function, i.e., the agent checks the eigenvalues of the matrix $M_t$ at each round. By Equation (14), we have $\lambda_{\min}(M_t) \ge \alpha t$. Then, following from Equation (11) in the proof of Lemma 4, we can obtain $\|x_a\|_{M_t^{-1}} \le \sqrt{\frac{1}{\alpha t}}$.

Next, we consider the "Fixed Interval Checking" function. In this case, the agent only checks the eigenvalues of $M_t$ every $P$ rounds. For the rounds $t$ at which the agent checks the uncertainty, we have the same result $\lambda_{\min}(M_t) \ge \alpha t$; for the rounds $t$ at which it does not, we have $\lambda_{\min}(M_t) \ge \alpha t'$, where $t'$ is the last round at which the agent conducted the check and $t - t' \le P$. When $t \ge 2P$, $t' \ge t - P \ge \frac{t}{2}$, so we obtain $\|x_a\|_{M_t^{-1}} \le \sqrt{\frac{1}{\alpha t'}} \le \sqrt{\frac{2}{\alpha t}}$.

Finally, we consider the "Exponential Phase Checking" function. At rounds $t$ satisfying $2^i \le t < 2^{i+1}$ for $i = 1, 2, \dots$, the last checking point is $t' = 2^i$, so we have $\lambda_{\min}(M_t) \ge \alpha\, 2^i$. When $t \ge 2$, we have $\frac{t}{2} \le 2^i$, and then $\|x_a\|_{M_t^{-1}} \le \sqrt{\frac{1}{\alpha 2^i}} \le \sqrt{\frac{2}{\alpha t}}$.

Therefore, to generalize the bound, we conclude that when $t \ge 2P$, $\|x_a\|_{M_t^{-1}} \le \sqrt{\frac{2}{\alpha t}}$ for all three checking functions. □

A.9 Proof of Theorem 1
Theorem 1 (Regret of CLiSK). With probability at least $1-\delta$ for some $\delta\in(0,1)$, the regret upper bound of CLiSK satisfies
$$R(T) \le \frac{8(1+\sqrt{d}R)^2\log(|\mathcal{K}|)}{c_1\rho^2\, b}\log\Big(\frac{d}{\delta}\Big) + 4\sqrt{\frac{2T\log(|\mathcal{K}|)}{c_1\rho^2\, b}}\Bigg(\sqrt{2\log\Big(\frac{1}{\delta}\Big) + d\log\Big(1 + \frac{T + (1+\sqrt{d}R)\, b_T}{\lambda d}\Big)} + \sqrt{\lambda}\Bigg) = O\big(\sqrt{dT\log(T)} + d\big),$$
where $R$ and $\rho^2$ are constants in Definition 1.

Proof. Denote the instantaneous regret at round $t$ by $\mathrm{reg}_t$. We first decompose it as follows:
$$\mathrm{reg}_t = \big(x_{a_t^*}^\top\theta^* + \eta_t\big) - \big(x_{a_t}^\top\theta^* + \eta_t\big)$$
$$= x_{a_t^*}^\top(\theta^* - \theta_t) + \big(x_{a_t^*}^\top\theta_t + \alpha_t\|x_{a_t^*}\|_{M_t^{-1}}\big) - \big(x_{a_t}^\top\theta_t + \alpha_t\|x_{a_t}\|_{M_t^{-1}}\big) + x_{a_t}^\top(\theta_t - \theta^*) - \alpha_t\|x_{a_t^*}\|_{M_t^{-1}} + \alpha_t\|x_{a_t}\|_{M_t^{-1}}$$
$$\le x_{a_t^*}^\top(\theta^* - \theta_t) + x_{a_t}^\top(\theta_t - \theta^*) - \alpha_t\|x_{a_t^*}\|_{M_t^{-1}} + \alpha_t\|x_{a_t}\|_{M_t^{-1}} \quad (15)$$
$$\le \alpha_t\|x_{a_t^*}\|_{M_t^{-1}} + \alpha_t\|x_{a_t}\|_{M_t^{-1}} - \alpha_t\|x_{a_t^*}\|_{M_t^{-1}} + \alpha_t\|x_{a_t}\|_{M_t^{-1}} = 2\alpha_t\|x_{a_t}\|_{M_t^{-1}}, \quad (16)$$
where Equation (15) follows from the UCB strategy for arm selection, and Equation (16) follows from Lemma 1. Next, we have
$$R(T) = \sum_{t=1}^{T_0}\mathrm{reg}_t + \sum_{t=T_0+1}^{T}\mathrm{reg}_t$$
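The key fact used in Lemma 6 is that, for all three checking schedules, the last checkpoint $t'$ before round $t$ satisfies $t' \ge t/2$ once $t \ge 2P$, which is what turns $\sqrt{1/(\alpha t')}$ into $\sqrt{2/(\alpha t)}$. A minimal sketch verifying this on a range of rounds (function name and schedule labels are ours):

```python
import math

def last_check(t, schedule, P=10):
    """Most recent uncertainty-checking round t' <= t under the three
    schedules discussed in Lemma 6."""
    if schedule == "continuous":
        return t
    if schedule == "fixed_interval":           # checks at P, 2P, 3P, ...
        return (t // P) * P
    if schedule == "exponential":              # checks at 2, 4, 8, ...
        return 2 ** int(math.log2(t))
    raise ValueError(schedule)

P = 10
for t in range(2 * P, 200):
    for s in ("continuous", "fixed_interval", "exponential"):
        assert last_check(t, s, P) >= t / 2    # key fact behind Lemma 6
print("t' >= t/2 holds for all three schedules on the tested range")
```

For the exponential schedule the worst case is $t = 2^{i+1} - 1$ with $t' = 2^i$, which still satisfies $t' \ge t/2$; the fixed-interval worst case is $t = kP + P - 1$ with $t' = kP$, covered once $k \ge 1$.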
$$\le T_0 + \sum_{t=T_0+1}^{T} 2\alpha_t\|x_{a_t}\|_{M_t^{-1}} \quad (17)$$
$$\le T_0 + 2\sum_{t=T_0+1}^{T}\alpha_t\sqrt{\frac{2}{\lambda_{\mathcal{K}}bt}} \quad (18)$$
$$\le T_0 + 4\alpha_T\sqrt{\frac{2T}{\lambda_{\mathcal{K}}b}}, \quad (19)$$
where Equation (17) is because the instantaneous regret satisfies $\mathrm{reg}_t \le 1$ by Assumption 1, Equation (18) follows from Lemma 4, and Equation (19) is because $\alpha_t$ is non-decreasing and $\sum_{t=1}^{T}\frac{1}{\sqrt{t}} \le 2\sqrt{T}$. Recall the definition of $T_0 \triangleq \frac{8(1+\sqrt{d}R)^2}{b\lambda_{\mathcal{K}}}\log\big(\frac{d}{\delta}\big)$ in Lemma 3 and the definition of $\alpha_t$ in Lemma 1. Plugging $T_0$ and $\alpha_t$ into Equation (19), we obtain the regret bound. □

A.10 Proof of Theorem 2
Theorem 2 (Regret of CLiME). With probability at least $1-\delta$ for some $\delta\in(0,1)$, the regret upper bound of CLiME satisfies
$$R(T) \le 4\sqrt{\frac{2T}{\alpha}}\Bigg(\sqrt{2\log\Big(\frac{1}{\delta}\Big) + d\log\Big(1 + \frac{T + \alpha d T}{\lambda d c_0^2}\Big)} + \sqrt{\lambda}\Bigg) + 2P = O\big(\sqrt{dT\log(T)}\big).$$

Proof. With the same decomposition as in the proof of Theorem 1, we have
$$R(T) = \sum_{t=1}^{2P}\mathrm{reg}_t + \sum_{t=2P+1}^{T}\mathrm{reg}_t \le 2P + 2\sum_{t=2P+1}^{T}\alpha_t\|x_{a_t}\|_{M_t^{-1}}$$
$$\le 2P + 2\sum_{t=2P+1}^{T}\alpha_t\sqrt{\frac{2}{\alpha t}} \quad (20)$$
$$\le 2P + 4\alpha_T\sqrt{\frac{2T}{\alpha}} \quad (21)$$
$$= 2P + 4\sqrt{\frac{2T}{\alpha}}\Bigg(\sqrt{2\log\Big(\frac{1}{\delta}\Big) + d\log\Big(1 + \frac{T + \alpha d T}{\lambda d c_0^2}\Big)} + \sqrt{\lambda}\Bigg), \quad (22)$$
where Equations (20) and (21) follow from Lemmas 5 and 6 and steps analogous to those in Theorem 1. Note that $P > 1$ is a given constant for the "Fixed Interval Checking" function. Plugging $\alpha_T$ into the inequality, we obtain the result and conclude that $R(T) = O(\sqrt{dT\log(T)})$. □

A.11 Proof of Theorem 3
Since any algorithm for conversational bandits must select both arms and key terms, we model a policy $\pi$ as a tuple consisting of two components, $\pi = (\pi^{\mathrm{arm}}, \pi^{\mathrm{key}})$, where $\pi^{\mathrm{arm}}$ selects arms and $\pi^{\mathrm{key}}$ selects key terms. We assume that at each time step the policy can select at most one key term; otherwise, the number of key terms could exceed the number of arms, which is impractical. Let $\mathcal{H}_t = \{a_1, x_1, k_1, \tilde{x}_1, \dots, a_t, x_t, k_t, \tilde{x}_t\}$ denote the history of interactions between the policy and the environment up to time $t$. We note that the presence of key terms at every time step in $\mathcal{H}_t$ is without loss of generality because we allow $k_t$ to be empty if no conversation is initiated at round $t$.

The noise terms associated with both arm-level and key-term-level feedback, denoted by $\eta_t$ and $\tilde{\eta}_t$, follow the standard Gaussian distribution $\mathcal{N}(0,1)$. We also denote the feature vectors of the selected arm and key term by random variables $A_t, K_t \in \mathbb{R}^d$, and the arm-level and key-term-level rewards $X_t = \langle A_t, \theta\rangle + \eta_t$ and $\tilde{X}_t = \langle K_t, \theta\rangle + \tilde{\eta}_t$ follow $\mathcal{N}(\langle A_t,\theta\rangle, 1)$ and $\mathcal{N}(\langle K_t,\theta\rangle, 1)$, respectively. We denote by $\mathbb{P}_\theta$ the probability measure induced by environment $\theta$ and policy $\pi$, and by $\mathbb{E}_\theta$ the expectation under $\mathbb{P}_\theta$. With these definitions, we present the following lemma.

Lemma 7. Let $D(P\|Q)$ denote the KL divergence between distributions $P$ and $Q$, and let $\theta, \theta'$ be two environments. Then we have
$$D(\mathbb{P}_\theta \,\|\, \mathbb{P}_{\theta'}) = \frac{1}{2}\sum_{t=1}^{T}\Big(\mathbb{E}_\theta\big[\langle A_t, \theta-\theta'\rangle^2\big] + \mathbb{E}_\theta\big[\langle K_t, \theta-\theta'\rangle^2\big]\Big).$$

Proof. Given a bandit instance with parameter $\theta$ and a policy $\pi$, following Section 4.6 of Lattimore and Szepesvári [14], we construct the canonical bandit model of our setting as follows. Let $(\Omega, \mathcal{F}, \mathbb{P}_\theta)$ be a probability space and $\mathcal{A}$ the set of all possible arms, where $\Omega = (\mathcal{A}\times\mathbb{R})^T$, $\mathcal{F} = \mathcal{B}(\Omega)$, and the density function of the probability measure $\mathbb{P}_\theta$ is defined by $p_{\theta,\pi}: \Omega \to \mathbb{R}$:
$$p_\theta(\mathcal{H}_T) = \prod_{t=1}^{T}\pi_t^{\mathrm{arm}}(a_t \mid \mathcal{H}_{t-1})\, p_{a_t}(x_t) \cdot \pi_t^{\mathrm{key}}(k_t \mid \mathcal{H}_{t-1})\, \tilde{p}_{k_t}(\tilde{x}_t),$$
where $p_{a_t}$ and $\tilde{p}_{k_t}$ are the density functions of the arm-level and key-term-level reward distributions $P_{a_t}$ and $\tilde{P}_{k_t}$, respectively. The definition of $\mathbb{P}_{\theta'}$ is identical except that $p_{a_t}, \tilde{p}_{k_t}$ are replaced by $p'_{a_t}, \tilde{p}'_{k_t}$ and $P_{a_t}, \tilde{P}_{k_t}$ are replaced by $P'_{a_t}, \tilde{P}'_{k_t}$. By the definition
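Lemma 7 reduces the trajectory-level KL divergence to a sum of per-step Gaussian KL terms, each of the form $\frac{1}{2}\langle v, \theta-\theta'\rangle^2$ by Lemma 10 (unit variance). A small sketch of this decomposition for fixed (non-random) arm and key-term features, which is the case conditioned on the history (names are ours):

```python
def gaussian_kl(mu1, mu2, sigma=1.0):
    """KL divergence between N(mu1, sigma^2) and N(mu2, sigma^2), Lemma 10."""
    return (mu1 - mu2) ** 2 / (2 * sigma ** 2)

def trajectory_kl(arms, keys, theta, theta_prime):
    """Sum of per-step arm-level and key-term-level Gaussian KL terms,
    i.e., (1/2) sum_t (<A_t, d>^2 + <K_t, d>^2) with d = theta - theta'."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    diff = [a - b for a, b in zip(theta, theta_prime)]
    return sum(gaussian_kl(dot(a, theta), dot(a, theta_prime))
               + gaussian_kl(dot(k, theta), dot(k, theta_prime))
               for a, k in zip(arms, keys))

theta = [0.1, 0.2]
theta_p = [0.1, -0.2]           # differs only in the second coordinate
arms = [[1.0, 1.0], [0.5, -1.0]]
keys = [[0.0, 1.0], [1.0, 0.0]]
kl = trajectory_kl(arms, keys, theta, theta_p)
print(kl)
```

Note that only coordinates where $\theta$ and $\theta'$ disagree contribute, which is exactly the mechanism the lower-bound argument of Lemma 8 exploits.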
of KL divergence, $D(P\|Q) = \int_\Omega \log\big(\frac{dP}{dQ}\big)\, dP$, we have
$$D(\mathbb{P}_\theta\|\mathbb{P}_{\theta'}) = \int_\Omega \log\Big(\frac{d\mathbb{P}_\theta}{d\mathbb{P}_{\theta'}}\Big)\, d\mathbb{P}_\theta = \mathbb{E}_\theta\Big[\log\frac{d\mathbb{P}_\theta}{d\mathbb{P}_{\theta'}}\Big].$$
Note that
$$\log\frac{d\mathbb{P}_\theta}{d\mathbb{P}_{\theta'}}(\mathcal{H}_T) = \log\frac{p_{\theta,\pi}(\mathcal{H}_T)}{p_{\theta',\pi}(\mathcal{H}_T)} \quad (23)$$
$$= \log\frac{\prod_{t=1}^{T} \pi_t^{\mathrm{arm}}(a_t\mid\mathcal{H}_{t-1})\, p_{a_t}(x_t)\cdot\pi_t^{\mathrm{key}}(k_t\mid\mathcal{H}_{t-1})\, \tilde{p}_{k_t}(\tilde{x}_t)}{\prod_{t=1}^{T} \pi_t^{\mathrm{arm}}(a_t\mid\mathcal{H}_{t-1})\, p'_{a_t}(x_t)\cdot\pi_t^{\mathrm{key}}(k_t\mid\mathcal{H}_{t-1})\, \tilde{p}'_{k_t}(\tilde{x}_t)} = \sum_{t=1}^{T}\Big(\log\frac{p_{a_t}(x_t)}{p'_{a_t}(x_t)} + \log\frac{\tilde{p}_{k_t}(\tilde{x}_t)}{\tilde{p}'_{k_t}(\tilde{x}_t)}\Big),$$
where in Equation (23) we used the chain rule for Radon-Nikodym derivatives, and in the last equality all the terms involving the policy $\pi$ cancel. Therefore,
$$D(\mathbb{P}_\theta\|\mathbb{P}_{\theta'}) = \sum_{t=1}^{T}\Big(\mathbb{E}_\theta\Big[\log\frac{p_{A_t}(X_t)}{p'_{A_t}(X_t)}\Big] + \mathbb{E}_\theta\Big[\log\frac{\tilde{p}_{K_t}(\tilde{X}_t)}{\tilde{p}'_{K_t}(\tilde{X}_t)}\Big]\Big)$$
$$= \sum_{t=1}^{T}\Big(\mathbb{E}_\theta\Big[\mathbb{E}_\theta\Big[\log\frac{p_{A_t}(X_t)}{p'_{A_t}(X_t)} \,\Big|\, A_t\Big]\Big] + \mathbb{E}_\theta\Big[\mathbb{E}_\theta\Big[\log\frac{\tilde{p}_{K_t}(\tilde{X}_t)}{\tilde{p}'_{K_t}(\tilde{X}_t)} \,\Big|\, K_t\Big]\Big]\Big)$$
$$= \sum_{t=1}^{T}\Big(\mathbb{E}_\theta\big[D(P_{A_t}\|P'_{A_t})\big] + \mathbb{E}_\theta\big[D(\tilde{P}_{K_t}\|\tilde{P}'_{K_t})\big]\Big) = \frac{1}{2}\sum_{t=1}^{T}\Big(\mathbb{E}_\theta\big[\langle A_t,\theta-\theta'\rangle^2\big] + \mathbb{E}_\theta\big[\langle K_t,\theta-\theta'\rangle^2\big]\Big),$$
where the last equality uses Lemma 10 and the facts that $P_{A_t} \sim \mathcal{N}(\langle A_t,\theta\rangle,1)$, $P'_{A_t} \sim \mathcal{N}(\langle A_t,\theta'\rangle,1)$, $\tilde{P}_{K_t} \sim \mathcal{N}(\langle K_t,\theta\rangle,1)$, and $\tilde{P}'_{K_t} \sim \mathcal{N}(\langle K_t,\theta'\rangle,1)$, respectively. □

Next, we present a lower bound for conversational bandits, but without imposing the constraint that the number of arms is $K$.

Lemma 8. Let the arm set and the key-term set be $\mathcal{A} = \mathcal{K} = [-1,1]^d$ and $\Theta = \big\{\pm\sqrt{\frac{1}{T}}\big\}^d$. Then for any policy, there exists an environment $\theta\in\Theta$ such that the expected regret satisfies
$$\mathbb{E}_\theta[R(T)] \ge \frac{\exp(-4)}{4}\, d\sqrt{T}.$$

Proof. For any $i\in[d]$ and $\theta\in\Theta$, define $\mathcal{E}_{\theta,i}$ as the event that the sign of the $i$-th coordinate of at least half of $\{A_t\}_{t=1}^{T}$ does not agree with $\theta$:
$$\mathcal{E}_{\theta,i} = \Big\{\sum_{t=1}^{T} \mathbb{I}\{\mathrm{sign}(A_{ti}) \ne \mathrm{sign}(\theta_i)\} \ge \frac{T}{2}\Big\}.$$
Let $p_{\theta,i} = \mathbb{P}_\theta(\mathcal{E}_{\theta,i})$ and $\theta' = (\theta_1,\dots,\theta_{i-1},-\theta_i,\theta_{i+1},\dots,\theta_d)^\top$, i.e., $\theta'$ is the same as $\theta$ except that the $i$-th coordinate is negated. It is easy to verify that $\mathcal{E}_{\theta,i}^c = \mathcal{E}_{\theta',i}$. Thus, applying Lemma 9 and Lemma 7, we obtain
$$p_{\theta,i} + p_{\theta',i} \ge \frac{1}{2}\exp\big(-D(\mathbb{P}_\theta\|\mathbb{P}_{\theta'})\big) = \frac{1}{2}\exp\Big(-\frac{1}{2}\sum_{t=1}^{T}\big(\mathbb{E}_\theta\big[\langle A_t,\theta-\theta'\rangle^2\big] + \mathbb{E}_\theta\big[\langle K_t,\theta-\theta'\rangle^2\big]\big)\Big) \ge \frac{1}{2}\exp(-4),$$
where the last inequality follows from a straightforward calculation showing that $\langle A_t,\theta-\theta'\rangle^2 \le 4/T$ and $\langle K_t,\theta-\theta'\rangle^2 \le 4/T$. Since $|\Theta| = 2^d$, we have
$$\sum_{\theta\in\Theta}\frac{1}{|\Theta|}\sum_{i=1}^{d} p_{\theta,i} = \frac{1}{|\Theta|}\sum_{i=1}^{d}\sum_{\theta\in\Theta}p_{\theta,i} \ge \frac{1}{2^d}\cdot d\cdot\frac{2^d}{2}\cdot\frac{1}{2}\exp(-4) = \frac{d}{4}\exp(-4).$$
This implies the existence of some $\theta^*\in\Theta$ such that
$$\sum_{i=1}^{d} p_{\theta^*,i} \ge \frac{d}{4}\exp(-4). \quad (24)$$
Choosing this $\theta^*$, define the optimal arm $a^*$ as
$$a^* = \arg\max_{a\in\mathcal{A}}\langle a,\theta^*\rangle = \arg\max_{a\in\mathcal{A}}\sum_{i=1}^{d} a_i\theta_i^*.$$
It is easy to verify that to maximize $\sum_{i=1}^{d} a_i\theta_i^*$ we must have $a_i^* = \mathrm{sign}(\theta_i^*)$ for all $i\in[d]$. Therefore, the expected regret is at least
$$\mathbb{E}_{\theta^*}[R(T)] = \mathbb{E}_{\theta^*}\Big[\sum_{t=1}^{T}\langle a^* - A_t, \theta^*\rangle\Big] = \mathbb{E}_{\theta^*}\Big[\sum_{t=1}^{T}\sum_{i=1}^{d}(a_i^* - A_{ti})\theta_i^*\Big] = \mathbb{E}_{\theta^*}\Big[\sum_{t=1}^{T}\sum_{i=1}^{d}\big(\mathrm{sign}(\theta_i^*) - A_{ti}\big)\theta_i^*\Big]$$
$$\ge \mathbb{E}_{\theta^*}\Big[\sum_{t=1}^{T}\sum_{i=1}^{d} 2\,\mathbb{I}\big\{\mathrm{sign}(A_{ti})\ne\mathrm{sign}(\theta_i^*)\big\}\sqrt{\frac{1}{T}}\Big] = 2\sqrt{\frac{1}{T}}\sum_{i=1}^{d}\mathbb{E}_{\theta^*}\Big[\sum_{t=1}^{T}\mathbb{I}\big\{\mathrm{sign}(A_{ti})\ne\mathrm{sign}(\theta_i^*)\big\}\Big]$$
$$\ge \sqrt{T}\sum_{i=1}^{d}\mathbb{P}_{\theta^*}\Big[\sum_{t=1}^{T}\mathbb{I}\big\{\mathrm{sign}(A_{ti})\ne\mathrm{sign}(\theta_i^*)\big\} \ge \frac{T}{2}\Big] \quad (25)$$
$$= \sqrt{T}\sum_{i=1}^{d} p_{\theta^*,i} \ge \frac{\exp(-4)}{4}\, d\sqrt{T},$$
where Equation (25) uses Markov's inequality, and the last inequality follows from Equation (24). □

Theorem 3 (Regret lower bound). For any policy that chooses at most one key term per time step, there exists an instance of the conversational bandit problem such that the expected regret is at least $\Omega(\sqrt{dT})$. Furthermore, for any $T = 2^m$ with $m\in[d]$, the regret is at least $\Omega(\sqrt{dT\log(T)})$.

Proof. Suppose we have $\beta = \frac{d}{m}$ smaller problem instances $I_1, I_2, \dots, I_\beta$, each corresponding to an $m$-dimensional, $K$-armed bandit instance with a horizon of $T/\beta$, and we assume they have preference vectors $\theta_1, \dots, \theta_\beta \in \mathbb{R}^m$, respectively. We denote the arm set for instance $I_j$ by $\mathcal{A}_{I_j} \subset \mathbb{R}^m$, and the regret incurred by instance $I$ under policy $\pi$ by $R_I^\pi(T)$. Next, we construct a $d$-
dimensional instance $I = (I_1, I_2, \dots, I_\beta)$ by letting the unknown preference vector for instance $I$ be $\theta = (\theta_1^\top, \dots, \theta_\beta^\top)^\top$, and dividing the time horizon $T$ into $\beta$ consecutive periods, each of length $T/\beta$. For each time step $t\in[T]$, the feature vectors of arms $\mathcal{A}_t$ are constructed from instance $I_j$, where $j = \lceil t\beta/T\rceil$. Specifically, $\mathcal{A}_t = \{(0^\top,\dots,x^\top,\dots,0^\top)^\top \mid x\in\mathcal{A}_{I_j}\}$, where the non-zero entry is located at the $j$-th block. This means that at time $t$, the learner can only obtain information about the $j$-th block of the preference vector $\theta$. Therefore, for any policy $\pi$, there exist policies $\pi_1,\dots,\pi_\beta$ such that $R_I^\pi(T) = \sum_{j=1}^{\beta} R_{I_j}^{\pi_j}\big(\frac{T}{\beta}\big)$. Applying Lemma 8, we can always find instances $I_1, I_2, \dots, I_\beta$ such that
$$R_I^\pi(T) = \sum_{j=1}^{\beta} R_{I_j}^{\pi_j}\Big(\frac{T}{\beta}\Big) \ge \sum_{j=1}^{\beta}\Omega\Big(m\sqrt{\frac{T}{\beta}}\Big) = \Omega\big(m\sqrt{T\beta}\big) = \Omega\Big(m\sqrt{\frac{Td}{m}}\Big) = \Omega\big(\sqrt{dTm}\big) = \Omega\big(\sqrt{dT\log(T)}\big). \qquad \Box$$

A.12 Technical Inequalities
We present the technical inequalities used throughout the proofs, with detailed references for the reader's convenience.

Lemma 9 (Bretagnolle and Huber [3]). Let $P$ and $Q$ be probability measures on the same measurable space $(\Omega,\mathcal{F})$, and let $A\in\mathcal{F}$ be an arbitrary event. Then
$$P(A) + Q(A^c) \ge \frac{1}{2}\exp\big(-D(P\|Q)\big),$$
where $D(P\|Q) = \int_\Omega\log\big(\frac{dP}{dQ}\big)\,dP = \mathbb{E}_P\big[\log\frac{dP}{dQ}\big]$ is the KL divergence between $P$ and $Q$, and $A^c = \Omega\setminus A$ is the complement of $A$.

Lemma 10 (KL divergence between Gaussian distributions). If $P\sim\mathcal{N}(\mu_1,\sigma^2)$ and $Q\sim\mathcal{N}(\mu_2,\sigma^2)$, then
$$D(P\|Q) = \frac{(\mu_1-\mu_2)^2}{2\sigma^2}.$$

Lemma 11 (Matrix Chernoff, Corollary 5.2 in Tropp [24]). Consider a finite sequence $\{X_k\}$ of independent, random, self-adjoint matrices with dimension $d$. Assume that each random matrix satisfies $X_k \succeq 0$ and $\lambda_{\max}(X_k) \le R$ almost surely. Define
$$Y := \sum_k X_k \quad \text{and} \quad \mu_{\min} := \lambda_{\min}\big(\mathbb{E}[Y]\big) = \lambda_{\min}\Big(\sum_k\mathbb{E}[X_k]\Big).$$
Then, for any $\delta\in(0,1)$,
$$\Pr\Big[\lambda_{\min}\Big(\sum_k X_k\Big) \le (1-\delta)\mu_{\min}\Big] \le d\Big[\frac{e^{-\delta}}{(1-\delta)^{1-\delta}}\Big]^{\mu_{\min}/R}.$$

Lemma 12 (Determinant-trace inequality, Lemma 10 in Abbasi-Yadkori et al. [1]). Suppose $X_1, X_2, \dots, X_t\in\mathbb{R}^d$ and for any $1\le s\le t$, $\|X_s\|_2 \le L$. Let $V_t = \lambda I + \sum_{s=1}^{t} X_s X_s^\top$ for some $\lambda > 0$. Then
$$\det(V_t) \le \Big(\lambda + \frac{tL^2}{d}\Big)^d.$$
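The determinant-trace inequality (Lemma 12) is easy to sanity-check numerically. In $\mathbb{R}^2$ the determinant has a closed form, so a sketch needs no linear-algebra library (names are ours):

```python
import math
import random

def det_trace_check(t, L=1.0, lam=1.0, seed=3):
    """Numerically check Lemma 12 in R^2: det(V_t) <= (lam + t L^2 / d)^d
    for V_t = lam I + sum_s x_s x_s^T with ||x_s|| <= L and d = 2."""
    rng = random.Random(seed)
    a, b, c = lam, 0.0, lam                      # V_t = [[a, b], [b, c]]
    for _ in range(t):
        phi = rng.uniform(0, 2 * math.pi)
        r = L * math.sqrt(rng.random())          # random radius, ||x|| <= L
        x, y = r * math.cos(phi), r * math.sin(phi)
        a += x * x
        b += x * y
        c += y * y
    det = a * c - b * b
    bound = (lam + t * L ** 2 / 2) ** 2          # (lam + t L^2 / d)^d, d = 2
    return det, bound

det, bound = det_trace_check(500)
print(det <= bound)
```

The inequality is just AM-GM in disguise: $\det(V_t) \le (\mathrm{Tr}(V_t)/d)^d$ and $\mathrm{Tr}(V_t) \le d\lambda + tL^2$, which is the same chain used in Equation (4) of Lemma 1's proof.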
arXiv:2505.21396v1 [cs.CL] 27 May 2025

Improving Research Idea Generation Through Data: An Empirical Investigation in Social Science

Xiao Liu1, Xinyi Dong2, Xinyang Gao3, Yansong Feng1* and Xun Pang4,5,6*
1Wangxuan Institute of Computer Technology  2Yuanpei College  3School of Government
4School of International Studies  5Institute for Carbon Neutrality  6Analytics Lab for Global Risk Politics
Peking University
{lxlisa,xinyang0614,fengyansong,xpang}@pku.edu.cn  dongxy@stu.pku.edu.cn

Abstract
Recent advancements in large language models (LLMs) have shown promise in generating novel research ideas. However, these ideas often face challenges related to feasibility and expected effectiveness. This paper explores how augmenting LLMs with relevant data during the idea generation process can enhance the quality of generated ideas. We introduce two ways of incorporating data: (1) providing metadata during the idea generation stage to guide LLMs toward feasible directions, and (2) adding automatic validation during the idea selection stage to assess the empirical plausibility of hypotheses within ideas. We conduct experiments in the social science domain, specifically with climate negotiation topics, and find that metadata improves the feasibility of generated ideas by 20%, while automatic validation improves the overall quality of selected ideas by 7%. A human study shows that LLM-generated ideas, along with their related data and validation processes, inspire researchers to propose research ideas with higher quality. Our work highlights the potential of data-driven research idea generation, and underscores the practical utility of LLM-assisted ideation in real-world academic settings.
1 Introduction
Recent advances in large language models (LLMs) have demonstrated their potential to generate domain-specific research ideas, with some studies suggesting that these ideas can exhibit greater novelty than those proposed by human experts (Si et al., 2024; Yamada et al., 2025). However, many LLM-generated ideas suffer from practical limitations: they may be infeasible to implement, lack suitable datasets for validation, or have uncertain effectiveness. For instance, an LLM might propose investigating "the impact of diplomats' childhood environmental experiences on their bargaining positions in UN climate negotiations", which is an interesting idea but lacks available data for empirical analysis.

*Corresponding authors.

Intuitively, if LLMs are provided with relevant datasets, they could be better equipped to generate empirically grounded research ideas: those that are not only novel but also feasible for experimentation. Just as human researchers navigate trade-offs between theoretical ambition and empirical tractability when developing research ideas, LLMs could benefit from this balancing act when data is available. For example, if the LLM is aware of the existence of records on climate conference attendance, it might propose a more feasible study, like "how the professional backgrounds of diplomats influence their countries' emission reduction commitment ambitions."

In this paper, we investigate whether augmenting LLMs with data during the research idea generation process can enhance the quality of generated ideas. Data can not only help LLMs generate more feasible ideas, but also enable preliminary validation of hypotheses within ideas. With access to relevant datasets, LLMs can write code to analyze the data and perform reasoning to assess whether the hypotheses are supported by the available evidence. Although this validation
https://arxiv.org/abs/2505.21396v1
is preliminary and does not guarantee sound conclusions, it provides valuable signals regarding whether the ideas are likely to be effective.

As shown in Figure 1, the standard framework for LLM ideation consists of three stages: literature search, idea generation, and idea selection. Models first search related literature for a given topic, then generate ideas based on the retrieved literature, and finally rank and select the top ideas as the output. We enhance this framework by incorporating data at two key stages: (1) during idea generation, we provide metadata, such as dataset descriptions, to guide models toward feasible research directions; and (2) during idea selection, we integrate automatic validation to account for the empirical plausibility of the proposed hypotheses within ideas.

Figure 1: Overview of how we incorporate data into the research idea generation process (research topic → literature search → related literature → idea generation with metadata (§3) → research idea: research question, theory, hypotheses → idea selection with automatic validation (§4) → top ideas, which inspire researchers (§5)).

We conduct experiments in the domain of social science, focusing specifically on topics related to climate negotiations. To support the experiments, we first collect and gather relevant datasets into a unified CLIMATE DATABANK. To evaluate the impact of metadata, we compare LLM-generated ideas with and without access to the metadata of CLIMATE DATABANK, and observe that incorporating metadata improves feasibility by 20% and expected effectiveness by 18% in human evaluation. Additionally, we find that automatic validation improves the accuracy of idea ranking by an average of 8%, and ideas selected with validation are rated 7% higher in human evaluation compared to those selected without validation.
Beyond assessing the quality of generated ideas, we explore whether LLM-generated ideas, along with their related data and validation processes, can inspire human researchers to develop their own ideas. In a study with 23 researchers, we find that, compared to traditional idea creation aided only by the Internet, participants propose ideas of higher quality when given LLM-generated ideas as reference. Feedback from participants indicates that LLM-generated ideas and validation processes are very helpful, with some researchers using them as starting points for further refinement, which helps broaden their thinking.

Our contributions are as follows: (1) We propose two ways of integrating data into research idea generation: adding metadata in idea generation and adding automatic validation in idea selection. (2) Our experiments demonstrate that metadata and automatic validation improve the quality of generated ideas, particularly in feasibility and expected effectiveness. (3) The human study reveals that LLM-generated ideas can inspire researchers to propose higher-quality ideas. (4) We construct the CLIMATE DATABANK to support future work in data-driven research idea generation.

2 Data Collection
CLIMATE DATABANK Construction  We first collect data related to climate negotiations, constructing the CLIMATE DATABANK to facilitate the following experiments. Our process begins with a comprehensive literature review, identifying important and commonly used datasets. We then collect datasets of common variables from World Bank Open Data*, and other datasets from their original sources.

The CLIMATE DATABANK is composed of three primary types of data:
(1) Textual data, which includes documents such as national communications and high-level statements issued by various countries, enabling both qualitative analysis and text mining. (2) Panel data, such as the Gross Domestic Product (GDP) of each country over time, facilitating longitudinal analysis of trends over multiple years. (3) Cross-sectional data, capturing static attributes such as membership in the Alliance of Small Island States (AOSIS), with all values standardized to the year 2025 for consistency.

CLIMATE DATABANK contains 22 datasets in total, each stored in CSV format for ease of access. The full list of datasets and corresponding data descriptions is in Appendix Table 6.

Reference Paper Collection  During the literature review, we also collect papers with clear hypotheses and replicable data. After manually reviewing 103 papers, we identify 8 papers that meet these criteria, as shown in Appendix Table 7. These papers, along with the corresponding data, are used in §4, where models are asked to validate the hypotheses in these papers and rank the ground-truth ideas among LLM-generated ideas.

*https://data.worldbank.org/

3 Can Metadata Benefit Idea Generation?
This section explores the role of metadata in idea generation. We first describe how social science research ideas are structured and generated, then explain how metadata is integrated into the generation process. Finally, we present both automatic and human evaluation results.

3.1 Social Science Idea Generation
A typical social science research idea consists of three components: a research question rq, a theory th, and several hypotheses h (Powner, 2014; King et al., 1994). As illustrated in the example of Figure 2 (right), the research question guides the study by identifying the central issue to be explored. The theory speculates on the answer to the research question and explains why the proposed answer is reasonable.
The hypotheses identify observable implications of the theory, i.e., things we would observe if the theory is correct.

Given a research topic t, LLMs first conduct a literature search to retrieve related literature L, and then generate research ideas with the components (rq, th, h) through the idea generation stage. The generated ideas are then passed to the idea selection stage to select the top-ranked ideas.

3.2 Incorporating Metadata into Idea Generation
Figure 2 (left) shows how we incorporate metadata, which consists of concise dataset descriptions, into the idea generation prompt along with the topic and related literature. Each metadata entry summarizes a dataset in CLIMATE DATABANK in one or two sentences, including information such as the meaning of key variables, temporal coverage, and spatial scope. The prompt informs LLMs that there are existing data related to this topic, without strict restrictions on using the provided data. This ensures that models can balance theoretical creativity with empirical feasibility by themselves.

By exposing models to metadata early, we encourage data-informed ideation where the feasibility of measurement is considered. Note that in this stage, we provide only the metadata, not the real content of the data, preventing models from conducting data dredging by finding patterns in the data and disguising them as hypotheses.

3.3 Experimental
Setup

Research Topics  We generate 10 climate negotiation-related research topics using GPT-4o (Hurst et al., 2024), and manually verify them to ensure their quality. The created topics are in Appendix Table 8.

Methods  We experiment with three prevalent research idea generation methods: AI-Researcher (Si et al., 2024), GPT-Researcher (Elovic, 2023), and Chain-of-Ideas (Li et al., 2024a). Each method first retrieves relevant literature and then generates ideas, with detailed descriptions in Appendix C. We preserve each method's original design while adding metadata to the idea generation prompt. For each research topic, we generate 50 ideas per method, then select the top 5 for evaluation using a unified idea selection module.

Idea Selection  As Si et al. (2024) demonstrate that LLMs assess ideas better in pairwise ranking than in rating, we conduct a Swiss tournament for idea selection. Over 5 rounds, ideas are paired by similar accumulated scores, with LLMs ranking each pair using the criteria below.

Idea Evaluation  We assess idea quality using four criteria motivated by previous works (Si et al., 2024; Yang et al., 2024b): significance, novelty, feasibility, and expected effectiveness (abbreviated as exp. effectiveness). These criteria are used in both idea selection and evaluation, with detailed definitions in Appendix D. For automatic evaluation, we conduct tournament ranking and compute ELO scores following Idea Arena (Li et al., 2024a).

Implementation Details  We use GPT-4o (gpt-4o-2024-08-06) for idea generation and selection, and use Gemini-1.5-Pro (Team et al., 2024) (gemini-1.5-pro-002) and Claude-3.5-Sonnet (Anthropic, 2024) (claude-3-5-sonnet-20241022) as judge models.† More implementation details, prompts, and human evaluation details are in Appendices E, H, and I, respectively.
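The Swiss-tournament selection described above can be sketched in a few lines. This is an illustrative implementation under our own assumptions (the paper does not specify pairing details); the `better` callback stands in for the LLM pairwise judge, and all names are ours.

```python
import random

def swiss_tournament(ideas, better, rounds=5, seed=4):
    """Swiss-style idea selection: in each round, ideas are paired by
    similar accumulated score, and the pairwise judge
    `better(i, j) -> True if i wins` awards one point to the winner."""
    rng = random.Random(seed)
    scores = {i: 0 for i in ideas}
    for _ in range(rounds):
        # sort by score (random tie-breaking), then pair adjacent ideas
        order = sorted(ideas, key=lambda i: (-scores[i], rng.random()))
        for a, b in zip(order[::2], order[1::2]):
            winner = a if better(a, b) else b
            scores[winner] += 1
    return sorted(ideas, key=lambda i: -scores[i])

# toy judge: the idea with higher hidden quality always wins
quality = {f"idea{i}": i for i in range(8)}
ranking = swiss_tournament(list(quality), lambda a, b: quality[a] > quality[b])
print(ranking[0])   # idea7
```

With a consistent judge, the best idea wins every match it plays and therefore finishes on top, while the pairing-by-score rule keeps the number of comparisons linear in the number of ideas per round rather than quadratic.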
3.4 Results

† These model versions are used throughout the paper.

Figure 2: An example of metadata provided during idea generation, alongside a generated research idea.

Figure 3: Automatic evaluation results of ideas generated with (w.) and without metadata. A tabular version is in Appendix Table 10.

Automatic Evaluation. Figure 3 shows that metadata improves average ratings across all methods, suggesting that incorporating metadata enhances the overall quality of generated research ideas. Expected effectiveness consistently benefits from metadata, with feasibility and significance also improving in most cases, demonstrating metadata's
role in generating more empirically grounded and impactful ideas. However, novelty declines for AI-Researcher and Chain-of-Ideas when evaluated by Claude, indicating that data-aware generation may limit the production of highly unconventional ideas.

                     w. Metadata    Tie    w/o Metadata
Significance            38.8       22.4       38.8
Novelty                 42.6       14.0       43.4
Feasibility             46.5       27.1       26.4
Exp. Effectiveness      51.2       16.3       32.5
Overall                 43.4       14.7       41.9

Table 1: Human comparison results of ideas generated by GPT-Researcher with and without metadata (%).

Human Evaluation. We perform human evaluation on GPT-Researcher's output, as this method achieves high rankings in the automatic evaluation. We recruit human annotators at the graduate level or above, with academic backgrounds in social science. For the 50 idea pairs (5 ideas per topic across 10 topics) generated by GPT-Researcher with and without metadata, each pair is annotated by at least two participants. In addition to the four evaluation criteria, we also ask annotators to assess the overall quality of the research ideas.

As shown in Table 1, the human evaluation results align with the automatic evaluation trends. The integration of metadata leads to substantial improvements in feasibility (20%) and expected effectiveness (18%), though novelty experiences a modest decrease. The overall assessment score increases by 1.5%, suggesting that while metadata strengthens specific quality dimensions, the holistic assessment of research ideas involves balancing multiple criteria. Appendix G demonstrates an example where incorporating metadata improves the feasibility of a generated idea, thus improving its overall quality.

Domain     # Papers   # Hypotheses   Accuracy (%)
Diverse       20          100            78.0
Climate        8           18            72.2

(a) Accuracy of the hypothesis validation results.

Choice                               Ratio (%)
Mostly Validate the Hypothesis          50
Partially Validate the Hypothesis       40
Does not Help Validating                10

(b) Human evaluation results of the validation processes.
Knowledge Recall   Data Analysis   Reasoning/Code Generation   Result Interpretation
      30.0             63.3                 30.0                      33.3

(c) Error analysis of the validation processes. Numbers indicate the ratio of validation processes that encounter each error type (%).

Table 2: Pilot study results on LLM performance in automatic hypothesis validation. The LLM used is GPT-4o with the code interpreter assistant.

4 Can Automatic Validation Benefit Idea Selection?

This section investigates the role of automatic validation in idea selection through: (1) a pilot study assessing the hypothesis validation capabilities of LLMs, (2) a reference-based evaluation of idea selection performance, and (3) a human evaluation comparing ideas selected with and without the validation process.

4.1 Pilot Study: Can LLMs conduct preliminary hypothesis validation?

We assess LLMs' ability to validate hypotheses from published papers.

Experimental Setup. We extract 18 hypotheses from the 8 domain-specific papers collected in §2, of which 10 are supported and 8 are refuted. Hypotheses with insignificant or mixed evidence are excluded. To expand the experimental scope, we sample 50 hypotheses from DiscoveryBench (Majumder et al., 2024b), drawn from 20 papers across diverse fields such as the humanities, sociology, and economics. Since all these hypotheses are supported in the original papers, we create 50 negative hypotheses by modifying their variables or relations to balance the evaluation dataset. We experiment
with GPT-4o using the code interpreter assistant, a built-in tool available in GPT models. It achieves superior performance in quantitative reasoning with data (Liu et al., 2024b), and more advanced methods can also be employed in the future. We input the hypotheses along with their corresponding data into the model. The model engages in multi-turn interactions to write and run Python code in a sandbox environment, in order to validate whether the hypotheses are supported.

Results. We begin by evaluating whether the LLM's validation results align with the conclusions presented in the original papers. As shown in Table 2a, the model achieves over 70% accuracy on both general-domain and domain-specific hypotheses. To assess whether this performance stems from memorization, we compare it to a memorization baseline, where the LLM is asked to predict whether the hypotheses are supported without access to the data. Under this setting, the model correctly predicts 65% of DiscoveryBench cases and 55% of climate negotiation cases. Hypothesis validation with data surpasses the memorization baseline by a substantial margin (≥13%), suggesting that the LLM exhibits a meaningful capacity for hypothesis validation.

To evaluate the quality of the validation processes, we conduct a human evaluation, asking two domain experts to review the validation steps for 15 hypotheses drawn from 6 sampled climate negotiation papers, with the annotation interface shown in Appendix Figure 6. As shown in Table 2b, half of the validation processes mostly support the hypotheses with only minor flaws, while another 40% partially align with the hypotheses but raise significant concerns, such as insufficient control variables. The error analysis in Table 2c reveals that data analysis, particularly involving textual data, is the most challenging aspect for the model.
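To make concrete what one step of such a sandboxed validation trace looks like, here is a minimal, self-contained sketch in the spirit of the code the model writes. The group names and numbers are synthetic stand-ins invented for illustration (the real runs operate on CLIMATE DATABANK datasets), and the permutation test on a difference in means is one plausible choice of statistic, not the paper's fixed procedure:

```python
import random
import statistics

def permutation_test(a, b, n_perm=2000, seed=1):
    """Two-sided permutation test on the difference in group means."""
    rng = random.Random(seed)
    observed = statistics.mean(a) - statistics.mean(b)
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        # Re-split the shuffled pool and record how often the permuted
        # difference is at least as extreme as the observed one.
        diff = statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):])
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_perm

# Synthetic stand-in for, e.g., per-country "adaptation emphasis" scores;
# hypothetical groups, not the paper's data.
rng = random.Random(0)
aosis = [rng.gauss(0.88, 0.05) for _ in range(30)]
non_aosis = [rng.gauss(0.85, 0.05) for _ in range(30)]

diff, p = permutation_test(aosis, non_aosis)
print(f"mean difference = {diff:.3f}, permutation p = {p:.3f}")
```

A trace of this kind, code plus its printed statistics plus a textual reading of them, is what is later summarized for the judge model.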
Other common issues, including knowledge recall, reasoning/code generation, and result interpretation, also occur in approximately 30% of cases. Despite the imperfections, annotators note that the automatic validation is helpful as an auxiliary tool for exploratory research, which aligns well with our intended use of the validation process: as a reference during idea selection.

Figure 4: An example of incorporating automatic validation into idea selection. We check the feasibility of hypotheses in ideas, conduct automatic validation of feasible hypotheses, and provide the ideas together with the summarized validation processes to the judge model.

4.2 Incorporating Automatic Validation into Idea Selection

Figure 4 illustrates how we conduct automatic validation of generated ideas and incorporate the validation results into the idea selection process.

Feasibility Check. For each idea, we first assess the feasibility of validating its hypotheses. The idea, together with metadata
from all available datasets, is provided to an LLM. The model is prompted to determine whether the hypotheses can be tested using the provided datasets and to identify which datasets would be used for the validation. Specifically, the datasets are indexed numerically. If the model judges the hypotheses to be testable, it outputs the indices of the selected datasets along with a corresponding validation plan. Given the difficulty LLMs have in handling complex data analysis, the number of selected datasets is limited to a maximum of three, with no more than one being a textual dataset.

Hypothesis Validation. If the idea is deemed testable, the corresponding datasets are then provided to the LLM for validation. As in the pilot study, the model generates a validation trace consisting of code snippets, intermediate execution outputs, and textual interpretations.

Validation Process Summarization. The raw reasoning traces may be verbose and sometimes contain noise, such as trial-and-error in code execution. To make the output more interpretable and useful for downstream selection, we prompt the LLM to summarize the full validation process into concise natural-language steps, including the crucial reasoning and results that lead to the final conclusion. The summarized validation processes, along with the ideas, are then provided to the judge model for idea selection.

We use GPT-4o for both the feasibility check and validation process summarization, and GPT-4o with the code interpreter assistant for hypothesis validation. Implementation details and prompts are in Appendices E and H. A detailed case of the automatic validation process is in Appendix G.

4.3 Reference-based Automatic Evaluation

We evaluate the impact of validation on idea selection performance, following the setup of ResearchBench (Liu et al., 2025).
We prompt LLMs to perform pairwise ranking between ground-truth ideas (extracted from academic papers) and LLM-generated ideas, and compare ranking accuracy with and without access to validation processes. For each of the 8 climate negotiation papers we collected, we manually extract the research topic and use GPT-Researcher to generate 10 ideas on the same topic, provided with the corresponding dataset description. We then perform automatic validation on both the ground-truth and LLM-generated ideas, and ask judge LLMs to compare, pairwise, the ground-truth ideas with the generated ideas under the same topic. Accuracy is defined as the proportion of comparisons in which the ground-truth idea is ranked higher. To mitigate position bias, each pair is evaluated twice with reversed positions, and the results are averaged. We use Gemini-1.5-Pro and Claude-3.5-Sonnet as judge LLMs.

Results are shown in Table 3. For both models, incorporating validation leads to consistently higher average ranking accuracy compared to the setting without validation. Improvements are particularly notable in the feasibility and expected effectiveness dimensions.

Judge Model         w. Validation   Significance   Novelty   Feasibility   Exp. Effectiveness   Average
Gemini-1.5-Pro           ✗              69.9         71.3        29.7             56.7            56.9
                         ✓              67.3         65.8        55.6             60.6            62.3
Claude-3.5-Sonnet        ✗              89.4         82.5        20.1             83.8            69.0
                         ✓              88.1         86.9        46.9             93.6            78.9

Table 3: Accuracy of judge models in ranking ground-truth ideas among LLM-generated ideas (%). Better results of each judge
model are in bold.

Meanwhile, a slight decrease is observed in the judgment of significance and novelty.

4.4 Human Evaluation

                     w. Validation    Tie    w/o Validation
Significance             37.5        27.5        35.0
Novelty                  45.0        21.7        33.3
Feasibility              40.0        33.3        26.7
Exp. Effectiveness       43.3        27.5        29.2
Overall                  42.5        21.7        35.8

Table 4: Human comparison results of ideas selected by Claude-3.5-Sonnet with and without validation processes (%).

We then conduct a human evaluation comparing LLM-generated ideas selected with and without validation processes. For the 50 ideas generated by GPT-Researcher in §3 on each research topic, we use Claude-3.5-Sonnet, which performs better in the reference-based evaluation, to select the top 5 ideas in two settings: (1) based on the idea content alone, and (2) based on both the idea and its validation process. Human annotators then perform pairwise evaluations of the two sets, using the same evaluation setup as described in §3.4.

As shown in Table 4, ideas selected with validation processes are ranked higher across all dimensions, with the largest improvement observed in feasibility. This aligns with the reference-based evaluation results and suggests that validation processes provide a valuable signal for enhancing idea selection.

5 Human Study: Are the LLM-Generated Ideas Inspiring to Researchers?

Beyond evaluating idea quality, we are interested in whether LLM-generated ideas can be useful in real-world academic settings. We conduct a human study to investigate whether ideas generated by LLMs, along with related data and validation processes, can inspire researchers to formulate their own research ideas.

5.1 Experiment Design

We recruit 23 participants from a social science course to take part in the study. Among them, 19 are undergraduate or graduate students, and the remaining 4 are more senior researchers holding a PhD in a related field.
Participants are presented with four research topics related to climate negotiations and asked to select two topics they are personally interested in. For each selected topic, they are asked to propose a research idea.

For one of the two topics, participants are provided with three reference ideas, accompanied by the data snippets used in automatic validation and the validation processes. The reference ideas are from the experiment in §4.4, generated by GPT-Researcher with metadata and selected by Claude-3.5-Sonnet based on validation processes. For each idea, we present the first 10 lines of the datasets used during validation. Both the raw validation traces and the summarized versions are provided, and participants may choose which format to consult. For the other topic, participants are not given any references. They are allowed to browse the Internet and search for literature but are not permitted to use LLMs. Additional details, including the experiment interface, are provided in Appendix I.

5.2 Results

Quality of Ideas. We ask human experts to evaluate the quality of the ideas proposed by participants, using the same evaluation setup as in §3.4. Since the number of ideas proposed with and without references may differ for a given topic, we first pair ideas from both settings one-to-one. For any excess ideas in one setting, we
randomly sample additional ideas from the other setting to complete the set of pairs. As shown in Table 5a, ideas proposed with references demonstrate higher overall quality. Specifically, improvements are observed in novelty, feasibility, and expected effectiveness.

                     w. Reference    Tie    w/o Reference
Significance             39.1       21.7        39.1
Novelty                  43.5       23.9        32.6
Feasibility              50.0       17.4        32.6
Exp. Effectiveness       39.1       28.3        32.6
Overall                  39.1       28.3        32.6

(a) Human comparison results of ideas proposed by participants with vs. without access to references (including LLM-generated ideas, related data, and validation processes).

Helpfulness              High    Medium    Not Helpful
Reference Ideas          61.1     33.3         5.6
Data Segments            33.3     50.0        16.7
Validation Processes     55.5     38.9         5.6

(b) Participant feedback on the helpfulness of reference ideas, data segments, and validation processes for generating their own research ideas. High and medium helpfulness correspond to the very helpful and somewhat helpful options, respectively.

Table 5: Human study results on the inspirational value of LLM-generated ideas. Numbers are in percentages (%).

Feedback from Participants. To understand whether participants find the references helpful and how they use them, we collect self-reported feedback. Participants are asked to rate the helpfulness of the reference ideas, data segments, and validation processes separately, using a three-point scale: very helpful, somewhat helpful, and not helpful.

As shown in Table 5b, all three components are generally found helpful by most participants. Reference ideas and validation processes are rated as very helpful by more than half of the participants. The data segments receive relatively lower ratings, likely because raw data often requires additional interpretation or context to be fully understood, whereas ideas and validation outputs provide more immediately actionable guidance.

Several participants provide detailed feedback.
One student notes that they build their own idea by extending the most interesting reference idea, while another mentions that the concepts and measurements in the references help refine their own research direction. A professor also remarks that the references serve as useful shortcuts that they can revise upon. These insights highlight how LLM-generated references support researchers according to their background and research stage.

6 Related Work

Research Idea Generation. There is growing interest in leveraging LLMs for research idea generation, either as a standalone task (Si et al., 2024; Baek et al., 2024) or as part of an end-to-end automated research pipeline (Li et al., 2024b; Lu et al., 2024; Jansen et al., 2025). The former line of work focuses on enhancing the literature search and idea generation stages, typically generating ideas grounded in prior literature (Wang et al., 2024; Yang et al., 2024a). The latter line of work proposes more comprehensive frameworks that extend to later stages, such as experiment design, execution, and paper writing. Our work aligns more closely with the first line, but, unlike prior work, incorporates data into the idea generation process. While our work also involves code generation and execution for hypothesis validation, this is not intended as rigorous experimental verification, but serves as a preliminary signal to support idea selection.

Hypothesis Generation. A related
but distinct line of work focuses on hypothesis generation, where models generate hypotheses to explain phenomena given access to data, such as inducing rules from observations (Zhong et al., 2023; Qiu et al., 2024). Studies in this area explore data-driven methods (Majumder et al., 2024a; Zhou et al., 2024) or integrate literature with data (Liu et al., 2024a), but their goal is to uncover patterns in existing datasets, in contrast with our objective of generating high-quality research ideas.

7 Conclusion and Discussion

Our study shows that incorporating data into research idea generation, through metadata and automatic validation, improves the overall quality of research ideas generated by LLMs. By guiding idea generation with dataset descriptions and selecting ideas given automatic validation processes, LLMs are able to propose ideas that are more feasible and more likely to be effective. Beyond quality improvements, we find that these LLM-generated ideas, along with their validation traces, can serve as valuable inspiration for human researchers.

In discussing this work with social science researchers, we encounter thoughtful reflections on the value of LLM-generated ideas. Some researchers question whether ideas proposed by LLMs truly matter if they do not originate from human "care" or intention. These conversations raise deeper questions about the nature of research: What distinguishes a good idea from a valuable idea? How could LLM-generated ideas contribute to real-world research in ways that augment human creativity? While we provide a preliminary case study of such use in §5, these questions remain open and worth future exploration.

Limitations

Task Scope. While our experiments focus on topics related to climate negotiations, the proposed method could be applied to other quantitative social science research areas.
We believe that incorporating data could also enhance the generation of research ideas in other domains, such as computer science, but this would require further development of the method.

Exploration of LLMs and Validation Methods. Due to the high cost of human evaluation, our experiments focus on a single LLM and a specific automatic validation method. Future studies could systematically evaluate how different models and validation methods impact idea quality.

Trade-off between Novelty and Feasibility. The introduction of metadata improves feasibility but leads to a modest decline in novelty. This suggests that although LLMs are not explicitly restricted to the provided data, the metadata implicitly narrows their scope of imagination. Future work could broaden the data scope from existing data to data that can be collected, or better integrate literature with data to maintain a balance between creativity and feasibility.

Ethical Considerations

Research ideas generated by LLMs may reflect biases present in their training data and could unintentionally resemble existing work without proper citation. Therefore, these ideas should not be adopted for practical use without thorough validation. Furthermore, any use of LLM-generated ideas should be disclosed transparently to ensure ethical integrity.

References

Anthropic. 2024. Introducing Claude 3.5 Sonnet. https://www.anthropic.com/news/claude-3-5-sonnet.

Jinheon Baek, Sujay Kumar Jauhar, Silviu Cucerzan, and Sung Ju Hwang. 2024. ResearchAgent: Iterative research idea generation over scientific literature with large
language models. arXiv preprint arXiv:2404.07738.

Benjamin E Bagozzi. 2015. The multifaceted nature of global climate change negotiations. The Review of International Organizations, 10:439–464.

Daria Blinova, Rakesh Emuru, and Benjamin E Bagozzi. 2024. Individual attendance data for over 30 years of international climate change talks. Scientific Data, 11(1):1134.

Tobias Böhmelt. 2013. A closer look at the information provision rationale: Civil society participation in states' delegations at the UNFCCC. The Review of International Organizations, 8(1):55–80.

Paula Castro and Marlene Kammerer. 2021. The institutionalization of a cleavage: How differential treatment affects state behavior in the climate negotiations. International Studies Quarterly, 65(3):683–698.

Chen Chen, Ian Noble, Jessica Hellmann, Joyce Coffee, Martin Murillo, and Nitesh Chawla. 2015. University of Notre Dame Global Adaptation Index. University of Notre Dame.

Assaf Elovic. 2023. gpt-researcher. https://github.com/assafelovic/gpt-researcher.

Federica Genovese. 2019. Sectors, pollution, and trade: How industrial interests shape domestic positions on global climate agreements. International Studies Quarterly, 63(4):819–836.

Federica Genovese, Richard J McAlexander, and Johannes Urpelainen. 2023. Institutional roots of international alliances: Party groupings and position similarity at global climate negotiations. The Review of International Organizations, 18(2):329–359.

Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. 2024. GPT-4o system card. arXiv preprint, abs/2410.21276.

Peter Jansen, Oyvind Tafjord, Marissa Radensky, Pao Siangliulue, Tom Hope, Bhavana Dalvi Mishra, Bodhisattwa Prasad Majumder, Daniel S Weld, and Peter Clark. 2025. CodeScientist: End-to-end semi-automated scientific discovery with code-based experimentation. arXiv preprint arXiv:2503.22708.
Ayse Kaya and Lynne Steuerle Schofield. 2020. Which countries send more delegates to climate change conferences? Analysis of UNFCCC COPs, 1995–2015. Foreign Policy Analysis, 16(3):478–491.

Gary King, Robert O Keohane, and Sidney Verba. 1994. Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton University Press.

Long Li, Weiwen Xu, Jiayan Guo, Ruochen Zhao, Xingxuan Li, Yuqian Yuan, Boqiang Zhang, Yuming Jiang, Yifei Xin, Ronghao Dang, et al. 2024a. Chain of Ideas: Revolutionizing research via novel idea development with LLM agents. arXiv preprint arXiv:2410.13185.

Ruochen Li, Teerth Patel, Qingyun Wang, and Xinya Du. 2024b. MLR-Copilot: Autonomous machine learning research based on large language model agents. arXiv preprint arXiv:2408.14033.

Haokun Liu, Yangqiaoyu Zhou, Mingxuan Li, Chenfei Yuan, and Chenhao Tan. 2024a. Literature meets data: A synergistic approach to hypothesis generation. arXiv preprint arXiv:2410.17309.

Xiao Liu, Zirui Wu, Xueqing Wu, Pan Lu, Kai-Wei Chang, and Yansong Feng. 2024b. Are LLMs capable of data-based statistical and causal reasoning? Benchmarking advanced quantitative reasoning with data. In Findings of the Association for Computational Linguistics ACL 2024, pages 9215–9235.

Yujie Liu, Zonglin Yang, Tong Xie, Jinjie Ni, Ben Gao, Yuqiang Li, Shixiang Tang, Wanli Ouyang, Erik Cambria, and Dongzhan Zhou. 2025. ResearchBench: Benchmarking LLMs in scientific discovery via inspiration-based task decomposition. arXiv preprint arXiv:2503.21248.

Chris Lu, Cong Lu, Robert Tjarko Lange, Jakob Foerster, Jeff Clune, and David Ha. 2024. The AI Scientist: Towards fully automated open-ended scientific discovery. arXiv preprint arXiv:2408.06292
.

Bodhisattwa Prasad Majumder, Harshit Surana, Dhruv Agarwal, Sanchaita Hazra, Ashish Sabharwal, and Peter Clark. 2024a. Data-driven discovery with large generative models. arXiv preprint arXiv:2402.13610.

Bodhisattwa Prasad Majumder, Harshit Surana, Dhruv Agarwal, Bhavana Dalvi Mishra, Abhijeetsingh Meena, Aryan Prakhar, Tirth Vora, Tushar Khot, Ashish Sabharwal, and Peter Clark. 2024b. DiscoveryBench: Towards data-driven discovery with large language models. arXiv preprint arXiv:2407.01725.

Monty G Marshall, Ted Robert Gurr, and Keith Jaggers. 2014. Polity IV Project: Political regime characteristics and transitions, 1800–2013. Center for Systemic Peace, 5.

Leanne C Powner. 2014. Empirical Research and Writing: A Political Science Student's Practical Guide. CQ Press.

Linlu Qiu, Liwei Jiang, Ximing Lu, Melanie Sclar, Valentina Pyatkin, Chandra Bhagavatula, Bailin Wang, Yoon Kim, Yejin Choi, Nouha Dziri, et al. 2024. Phenomenal yet puzzling: Testing inductive reasoning capabilities of language models with hypothesis refinement. In The Twelfth International Conference on Learning Representations.

Chenglei Si, Diyi Yang, and Tatsunori Hashimoto. 2024. Can LLMs generate novel research ideas? A large-scale human study with 100+ NLP researchers. arXiv preprint arXiv:2409.04109.

Gemini Team, Petko Georgiev, Ving Ian Lei, Ryan Burnell, Libin Bai, Anmol Gulati, Garrett Tanzer, Damien Vincent, Zhufeng Pan, Shibo Wang, et al. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint, abs/2403.05530.

Vegard Tørstad, Håkon Sælen, and Live Standal Bøyum. 2020. The domestic politics of international climate commitments: Which factors explain cross-country variation in NDC ambition? Environmental Research Letters, 15(2):024021.

Qingyun Wang, Doug Downey, Heng Ji, and Tom Hope. 2024. SciMON: Scientific inspiration machines optimized for novelty.
In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 279–299.

Torsten Welle and Joern Birkmann. 2015. The World Risk Index: An approach to assess risk and vulnerability on a global scale. Journal of Extreme Events, 2(01):1550003.

Sarah Judith Wright, Anne Sietsma, Stefanie Korswagen, Ioannis N Athanasiadis, and Robbert Biesbroek. 2023. How do countries frame climate change? A global comparison of adaptation and mitigation in UNFCCC national communications. Regional Environmental Change, 23(4):129.

Yutaro Yamada, Robert Tjarko Lange, Cong Lu, Shengran Hu, Chris Lu, Jakob Foerster, Jeff Clune, and David Ha. 2025. The AI Scientist-v2: Workshop-level automated scientific discovery via agentic tree search. arXiv preprint arXiv:2504.08066.

Zonglin Yang, Xinya Du, Junxian Li, Jie Zheng, Soujanya Poria, and Erik Cambria. 2024a. Large language models for automated open-domain scientific hypotheses discovery. In Findings of the Association for Computational Linguistics ACL 2024, pages 13545–13565.

Zonglin Yang, Wanhao Liu, Ben Gao, Tong Xie, Yuqiang Li, Wanli Ouyang, Soujanya Poria, Erik Cambria, and Dongzhan Zhou. 2024b. MOOSE-Chem: Large language models for rediscovering unseen chemistry scientific hypotheses. arXiv preprint arXiv:2410.07076.

Ruiqi Zhong, Peter Zhang, Steve Li, Jinwoo Ahn, Dan Klein, and Jacob Steinhardt. 2023. Goal driven discovery of distributional differences via language descriptions. Advances in Neural Information Processing Systems, 36:40204–40237.

Yangqiaoyu Zhou, Haokun Liu, Tejes Srivastava, Hongyuan Mei, and Chenhao Tan. 2024. Hypothesis generation
with large language models. arXiv preprint arXiv:2404.04326.

A Data Collection

Table 6 presents the full list of datasets in CLIMATE DATABANK and the corresponding data descriptions. Table 7 lists the climate negotiation papers we collect for the automatic validation experiments.

National communications, highlevel statements, and business statements are collected from the UNFCCC website‡, which allows free download and copy. Earth negotiation bulletins are collected from the ENB website§. Meeting attendance records are from Blinova et al. (2024) under the CC0 1.0 license. The democracy index is from Marshall et al. (2014). The world risk index is from Welle and Birkmann (2015) under the CC BY license. The ND-GAIN vulnerability index is from Chen et al. (2015). All other data in CLIMATE DATABANK are from World Bank Open Data under the CC-BY 4.0 license.

B Research Topics

We generate 10 climate negotiation-related research topics using GPT-4o, with the prompt: Could you propose 10 research topics related to climate negotiation? The topics should be important for social science researchers, like in the community of political science and climate policies. The output should be in JSON format, with the key being the topic name and the value being the explanation. Each topic name should be within five words. The created topics are listed in Table 8, and their quality has been reviewed and verified by human experts. All the research topics and generated ideas throughout this paper are in English.

C Research Idea Generation Methods

We experiment with three prevalent research idea generation methods in §3:

•AI-Researcher (Si et al., 2024): This method first retrieves papers related to the given research topic from Semantic Scholar, uses the retrieved papers to ground idea generation, produces a large number of candidate ideas, and then ranks them to identify the best ones.
•GPT-Researcher (Elovic, 2023): This method builds a multi-agent framework consisting of planner, executor, and publisher agents. The planner generates plans, while the executor gathers relevant information. The publisher aggregates all information and generates the research ideas.

•Chain-of-Ideas (Li et al., 2024a): This method enhances the literature search module by organizing relevant literature in a chain structure to effectively mirror progressive research development.

‡ https://unfccc.int/
§ https://enb.iisd.org/

To ensure a fair comparison, each method is uniformly tasked with generating 50 candidate ideas for each research topic. We then use the same idea selection module to rank and select the top ideas.

D Evaluation Criteria

The ideas are evaluated according to the following four criteria:

•Significance: Whether the research idea is impactful to researchers and the broader public.

•Novelty: Whether the idea contributes fresh insights and perspectives to the existing body of knowledge.

•Feasibility: Whether the study can be done with available resources, time, and technology, typically within a one-year scope for a political science PhD student.

•Expected Effectiveness: How likely the proposed idea is to achieve its intended outcomes, i.e., how likely the theory is to be supported by empirical evidence.

A more detailed version of the criteria is shown in Table 9. This is provided to LLMs
https://arxiv.org/abs/2505.21396v1
during idea selection and automatic evaluation, as well as to human annotators for reference.

E Implementation Details

For the research idea generation methods, we adhere to their original hyperparameters but modify the idea generation prompts to include instructions related to idea formats, and add the metadata. Since in social science research, policy implications are frequently invoked to demonstrate a study's broader relevance and impact, we also ask LLMs to explain the policy implications of generated ideas in the idea generation step. Note that this is only for self-awareness and is excluded from subsequent idea selection and evaluation.

Textual Data
National communications: National communications submitted by countries every four years (Annex I Parties) or eight years (Non-Annex I Parties), outlining their efforts to address climate change.
Highlevel statements: Highlevel climate change conference speeches, covering the formal statements made by country representatives at COPs (2010-2023).
Earth negotiation bulletins: Reports summarizing the negotiation process and main outputs of UNFCCC meetings, including both daily reports and summary reports (1995-2024).
Business statements: UNFCCC statements of business associations in the span of eight years (2007-2014).

Panel Data
Meeting attendance records: Attendee records from all UNFCCC COP meetings (1995-2023), including their delegation, job, gender, and so on.
GDP: The sum of gross value added by all resident producers in the economy plus any product taxes and minus any subsidies not included in the value of the products (in US$).
GDP per capita: Gross domestic product divided by midyear population (in US$).
Population: The population of the country, which counts all residents regardless of legal status or citizenship.
Foreign direct investment: Direct investment equity flows in the reporting economy, which is the sum of equity capital, reinvestment of earnings, and other capital (in US$).
Life expectancy at birth: The number of years a newborn infant would live if prevailing patterns of mortality at the time of its birth were to stay the same throughout its life.
Gender parity index: The ratio of girls to boys enrolled at primary and secondary levels in public and private schools.
CO2 emissions per capita: Carbon dioxide (CO2) emissions excluding LULUCF per capita.
Forest area: Land under natural or planted stands of trees of at least 5 meters in situ (sq. km).
Natural resources rent: Rents from coal, oil and natural gas production (% of GDP).
Trade openness index: Sum of exports and imports of goods and services, divided by gross domestic product, expressed as a percentage.
Democracy index: The country's level of democracy, ranging from -10 to 10 (fully democratic).
World risk index: Higher scores indicate higher vulnerability to climate change.
ND-GAIN vulnerability index: Higher scores indicate higher vulnerability to climate change.

Cross-Sectional Data (in 2025)
Member of AOSIS: Whether the country is a member of the Alliance of Small Island States (AOSIS).
Member of OPEC: Whether the country is a member of the Organization of the Petroleum Exporting Countries (OPEC).
Member of G20: Whether the country is a member of G20.
Annex I country: Whether the country is an Annex I country.

Table 6: List of datasets and the corresponding
data descriptions in CLIMATE DATABANK.

1. The Multifaceted Nature of Global Climate Change Negotiations (Bagozzi, 2015)
2. A Closer Look at the Information Provision Rationale: Civil Society Participation in States' Delegations at the UNFCCC (Böhmelt, 2013)
3. Sectors, Pollution, and Trade: How Industrial Interests Shape Domestic Positions on Global Climate Agreements (Genovese, 2019)
4. The domestic politics of international climate commitments: which factors explain cross-country variation in NDC ambition? (Tørstad et al., 2020)
5. Which Countries Send More Delegates to Climate Change Conferences? Analysis of UNFCCC COPs, 1995–2015 (Kaya and Schofield, 2020)
6. The Institutionalization of a Cleavage: How Differential Treatment Affects State Behavior in the Climate Negotiations (Castro and Kammerer, 2021)
7. How Do Countries Frame Climate Change? A Global Comparison of Adaptation and Mitigation in UNFCCC National Communications (Wright et al., 2023)
8. Institutional Roots of International Alliances: Party Groupings and Position Similarity at Global Climate Negotiations (Genovese et al., 2023)

Table 7: List of reference papers used in automatic validation experiments.

Adaptation vs. Mitigation Focus: Study the negotiation dynamics and policy priorities between adaptation and mitigation efforts, and the factors influencing their prominence in different countries' strategies.
Climate Finance Politics: Examine the political challenges and negotiations around climate finance, including funding commitments, allocation mechanisms, and equity in financial support for adaptation and mitigation.
Climate Justice and Equity: Investigate how principles of justice and equity are integrated into climate negotiations and their impacts on policy outcomes for different countries and communities.
Compliance and Monitoring Mechanisms: Focus on the systems in place for ensuring adherence to international climate agreements, and the effectiveness of these mechanisms in promoting accountability.
Impacts of Domestic Policies: Explore how domestic climate policies of influential nations affect their negotiation positions and the overall dynamics in international climate agreements.
Historical Responsibility Debates: Analyze discussions around historical responsibility for climate change and how these debates shape fairness principles and burden-sharing in negotiations.
Negotiation Strategies and Tactics: Analyze the negotiation strategies employed by countries or blocs in climate negotiations, including coalition-building, bargaining tactics, and compromise-making.
Role of Non-State Actors: Study the influence and participation of non-state actors, such as NGOs, private sector, and indigenous groups, in shaping climate negotiation agendas and outcomes.
Power Dynamics and Influence: Examine the roles of different countries, especially major emitters versus vulnerable states, and their influence in shaping international climate agreements and commitments.
Technology Transfer and Collaboration: Explore the negotiations related to technology transfer, the barriers to effective collaboration, and how they impact developing countries' abilities to meet climate goals.

Table 8: Climate negotiation research topics used in this paper.

For idea selection in §3, we follow AI-Researcher's tournament ranking method but adapt it by having the model rank idea pairs based on the four evaluation aspects separately. The idea that wins in more aspects is considered the winner, and a tie occurs if the two ideas win an equal number of aspects. The temperature is set to 0 for all steps after idea generation. The maximum number of output tokens is set to 1024 for the feasibility check, idea
selection, and automatic evaluation. Experiments are conducted on 8 NVIDIA A800 GPUs.

F Experimental Results

Table 10 presents the ELO scores for research idea generation methods with and without metadata, serving as the tabular counterpart to Figure 3.

G Case Study

Table 11 presents an example of ideas generated by GPT-Researcher under the same topic. Idea 1, generated without metadata, contains undefined terms such as inclusivity and high-quality data, while Idea 2, which is guided by metadata, introduces clear and measurable hypotheses. The integration of metadata makes the research idea more actionable, increasing the likelihood of meaningful findings and improving overall quality.

Table 12 showcases an example of the automatic validation process. Based on an idea generated by GPT-Researcher under the topic Role of Non-State Actors, the LLM first conducts a feasibility check and selects three datasets from the CLIMATE DATABANK. It then performs hypothesis validation and summarizes the validation process in natural language.

H Prompts

Table 13 presents the prompt for idea generation using AI-Researcher. The same instructions regarding idea format, example ideas, and metadata are provided to GPT-Researcher and Chain-of-Ideas. The example ideas are drawn from existing academic papers.

Table 14 shows the prompt used for both idea selection and automatic evaluation in §3. The difference is that idea selection is conducted by the LLM for idea generation, whereas in automatic evaluation, other LLMs are used to reduce bias.

Tables 15 through 17 display the prompts for the automatic validation process, including feasibility checks, hypothesis validation, and validation process summarization. Table 18 outlines the prompt for idea selection in §4, which differs from Table 14 by incorporating the validation process.
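As a concrete reading aid for the ELO scores in Table 10, the aspect-wise pairwise vote used in tournament ranking and a standard ELO update can be sketched as follows. The paper does not publish its scoring code, so the K-factor and initial rating below are illustrative assumptions; only the JSON outcome encoding (0 for a tie, 1 for idea 1, 2 for idea 2) comes from the prompt in Table 14.

```python
# Hedged sketch: aspect-wise pairwise winner plus ELO rating updates.
# K_FACTOR and INITIAL_RATING are illustrative assumptions, not values
# taken from the paper.

K_FACTOR = 32
INITIAL_RATING = 1000
ASPECTS = ["Significance", "Novelty", "Feasibility", "Expected Effectiveness"]

def pair_winner(judgment):
    """judgment maps each aspect to 0 (tie), 1 (idea 1 better), or
    2 (idea 2 better), mirroring the JSON format in Table 14."""
    wins1 = sum(1 for a in ASPECTS if judgment[a] == 1)
    wins2 = sum(1 for a in ASPECTS if judgment[a] == 2)
    if wins1 > wins2:
        return 1
    if wins2 > wins1:
        return 2
    return 0  # tie: both ideas win an equal number of aspects

def elo_update(r1, r2, outcome):
    """Standard ELO update; outcome is 1, 2, or 0 (tie) as above."""
    expected1 = 1 / (1 + 10 ** ((r2 - r1) / 400))
    score1 = {1: 1.0, 2: 0.0, 0: 0.5}[outcome]
    r1_new = r1 + K_FACTOR * (score1 - expected1)
    r2_new = r2 + K_FACTOR * ((1 - score1) - (1 - expected1))
    return r1_new, r2_new

# Example: idea 1 wins Significance and Novelty, idea 2 wins Feasibility,
# Expected Effectiveness is a tie -> idea 1 wins the pair.
judgment = {"Significance": 1, "Novelty": 1,
            "Feasibility": 2, "Expected Effectiveness": 0}
outcome = pair_winner(judgment)
r1, r2 = elo_update(INITIAL_RATING, INITIAL_RATING, outcome)
```

Accumulating such updates over all pairwise judgments would yield per-method ratings comparable in spirit to the scores reported in Table 10.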
I Annotation Details

Figures 5 and 6 show the annotation interfaces for human evaluations of idea pairs (in §3-§5) and hypothesis validation processes (in §4.1), respectively. All annotators are fairly paid with more than $10 per hour.

Significance
1. Impact on the Field:
- Does the research have the potential to influence future work in the field significantly?
- Will it change the way scholars and practitioners think about a particular issue or problem?
2. Relevance to Current Problems:
- Does the research tackle urgent or pressing issues faced by society today?
- How does it contribute to solving real-world problems or advancing public policy?
3. Advancement of Theoretical or Practical Understanding:
- Does it deepen our theoretical insights or provide new frameworks for understanding?
- Can the findings be translated into practical applications or technologies that benefit society?

Novelty
1. Originality:
- Is the research question unique or a significant departure from existing studies?
- Does the theory offer a new perspective or challenge prevailing paradigms?
2. Innovation in Approach:
- Are there novel methodologies or analytical techniques proposed?
- Does it introduce new datasets or sources of evidence?
3. Contribution to Knowledge:
- Does the idea fill a significant gap in the literature?
- How does it expand or refine existing theories or models?

Feasibility
1. Resource Availability:
- Can the necessary data or materials be
accessed or acquired with reasonable effort?
- Are funding, human resources, and technical support sufficient?
2. Timeline Appropriateness:
- Can the study be realistically completed within one year?
- Does the research have clear stages with achievable milestones?
3. Technical and Methodological Soundness:
- Are the proposed methodologies practical and well-founded?

Expected Effectiveness
1. Theoretical Rigor:
- Is the theory logically sound with well-defined constructs and relationships?
- How well are the hypotheses grounded in existing literature and theory?
2. Empirical Evidence Potential:
- How robust is the potential for empirical evidence to support the theory?
- Are the proposed indicators measurable and likely to yield clear data?

Table 9: Detailed criteria for evaluating ideas.

In the human study of §5, participants are given 20 minutes to propose one research idea for each research topic they select. The experiment interface for this task is shown in Figure 7.

Method (w. Metadata ✗/✓): Significance, Novelty, Feasibility, Exp. Effectiveness, Average

Gemini-1.5-Pro as the Judge
AI-Researcher ✗: 902, 933, 1047, 951, 958
AI-Researcher ✓: 938, 931, 1098, 997, 991
GPT-Researcher ✗: 1019, 1000, 1045, 1015, 1020
GPT-Researcher ✓: 1073, 1021, 1183, 1134, 1103
Chain-of-Ideas ✗: 974, 1025, 822, 915, 934
Chain-of-Ideas ✓: 1094, 1091, 805, 988, 995

Claude-3.5-Sonnet as the Judge
AI-Researcher ✗: 870, 968, 1060, 881, 945
AI-Researcher ✓: 855, 859, 1152, 918, 946
GPT-Researcher ✗: 931, 972, 1076, 928, 977
GPT-Researcher ✓: 1000, 903, 1228, 1085, 1054
Chain-of-Ideas ✗: 1066, 1118, 768, 1012, 991
Chain-of-Ideas ✓: 1278, 1180, 716, 1176, 1088

Table 10: ELO scores for research idea generation methods with and without metadata. Better results of each method are in bold.

Topic: Compliance and Monitoring Mechanisms

Idea 1: Generated without Metadata
Research Question: How does the Global Stocktake process under the Paris Agreement influence collective progress toward climate goals, and what factors enhance its effectiveness?
Theory: The theory posits that the Global Stocktake serves as a critical feedback mechanism that fosters collective action by identifying gaps, sharing best practices, and increasing ambition. Its effectiveness depends on the inclusivity of the process, the quality of data used, and the willingness of states to act on its findings. The theory also suggests that the Global Stocktake is more effective when it incorporates inputs from non-party stakeholders and aligns with domestic political priorities.
Hypotheses:
1. The inclusivity of the Global Stocktake process (e.g., participation of non-party stakeholders) positively correlates with its effectiveness in fostering collective action.
2. High-quality data used in the Global Stocktake increases its impact on state ambition and compliance.
3. States that align their domestic policies with Global Stocktake findings are more likely to achieve their climate goals.
4. The Global Stocktake is less effective in fostering collective action when major emitters (e.g., the US, China) fail to participate meaningfully.

Idea 2: Generated with Metadata
Research Question: How do domestic political institutions influence the effectiveness of compliance monitoring in international climate negotiations?
Theory: Domestic political institutions play a critical role in shaping a country's commitment and ability to comply with international climate agreements. Democratic institutions, with their emphasis on transparency and accountability, are theorized to facilitate better compliance monitoring. Conversely, autocratic regimes may prioritize short-term economic gains over long-term
climate commitments, leading to weaker compliance. Additionally, institutional mechanisms such as independent regulatory agencies, judicial oversight, and civil society engagement may enhance the credibility and accuracy of compliance monitoring.
Hypotheses:
1. Countries with higher Democracy Index scores are more likely to submit accurate and timely compliance reports to international climate bodies.
2. The presence of independent regulatory agencies positively correlates with the quality of compliance monitoring in climate negotiations.
3. Civil society engagement, as measured by the number of environmental NGOs per capita, enhances the accuracy of compliance reporting.
4. Autocratic regimes are more likely to underreport their emissions compared to democratic regimes, controlling for economic and environmental factors.

Table 11: Examples of ideas generated by GPT-Researcher with and without metadata.

Topic: Role of Non-State Actors

Idea Generated
Research Question: How do non-state actors influence the ambition levels of national climate commitments under the Paris Agreement?
Theory: Non-state actors (NSAs), such as businesses, civil society organizations (CSOs), and research institutions, play a critical role in driving climate ambition by pressuring governments to adopt more stringent climate policies. This influence stems from their ability to mobilize public opinion, provide technical expertise, and create accountability mechanisms. The theory posits that NSAs are particularly effective in democracies, where governments are more responsive to public pressure, and in countries with high trade openness, where businesses are incentivized to align with international climate norms to maintain competitiveness.
Hypotheses:
1. Countries with higher levels of NSA participation in UNFCCC meetings will exhibit greater increases in the ambition of their Nationally Determined Contributions (NDCs) over time.
2.
The impact of NSA participation on NDC ambition will be stronger in democracies compared to autocracies.
3. Trade openness moderates the relationship between NSA participation and NDC ambition, with more open economies showing a stronger positive relationship.

Feasibility Check
Feasibility: Yes
Data Selected: 1 (national communications), 5 (meeting attendance records), 16 (democracy index)

Hypothesis Validation (raw traces omitted due to space limit)

Validation Process Summarization
1. The datasets were successfully loaded, containing information on national communications, attendance at UNFCCC meetings, and democracy index scores. The approach involves testing three hypotheses: 1) NSA attendance correlates with increased NDC ambition, 2) this relationship differs between democratic and autocratic countries, and 3) trade openness influences the relationship between NSA participation and NDC ambition.
2. NSA participation was quantified by counting attendees per country and year. Ambition levels in national communications were inferred by counting occurrences of keywords like 'increase', 'commitment', and 'target'.
3. Ambition scores were calculated based on keyword occurrences in national communications. The data was integrated with NSA participation and democracy index data, aligning them by country and year.
4. The integrated dataset was prepared, containing NSA participation, ambition scores, and democracy index scores. This dataset is ready for hypothesis testing.
5. Hypothesis 1 was tested, showing a statistically significant positive correlation between NSA participation and NDC ambition scores, supporting the hypothesis.
6. Hypothesis 1 is supported, indicating a positive correlation between NSA participation and NDC ambition.
7. Hypothesis 2 was tested by segmenting data into democracies and autocracies based on democracy index scores.
The relationship was not significantly stronger in democracies, so the hypothesis is not supported.
8. Hypothesis 2 is not supported, as the relationship between NSA participation and NDC ambition is not significantly stronger in democracies compared to autocracies.
9. Hypothesis 3 could not be tested due to the absence of trade openness data. The final results are: Hypothesis 1 is supported, Hypothesis 2 is not supported, and Hypothesis 3 needs more data.

Table 12: Examples of the automatic validation process.

Prompt for Idea Generation
You are an expert researcher in political science. Now I want you to help me brainstorm some new research ideas on the topic of {research topic}.
Here are some relevant papers on this topic just for your background knowledge:
{titles and abstracts of related literature}
The above papers are only for inspiration and you should not cite them and just make some incremental modifications. Instead, you should make sure your ideas are novel and distinct from the prior literature.
Here are existing data related to this topic:
Textual data:
1. National communications: National communications submitted by countries every four years (Annex I Parties) or eight years (Non-Annex I Parties), outlining their efforts to address climate change.
2. Highlevel statements: ...[omitted]...
Panel data:
5. Meeting attendance records: ...[omitted]...
Cross-sectional data:
19. Member of AOSIS: ...[omitted]...
You should generate {number of ideas to generate} different ideas on this topic. Try to be creative and diverse in idea generation, and do not repeat any similar ideas. You should aim for research that can be published in top political science journals. Good research should contribute to theoretical value and/or policy implications.
Each idea should be described as:
(1) Research Question: Clearly propose a research question, which should be closely related to the topic. Research questions can delve into issues of what, why, how, when, and so forth.
Interesting research questions are those that intellectually appeal to political scientists, address concerns of a broad population and decision makers, and where the answers are not obvious.
(2) Theory: Develop a theory that reasonably speculates on the answer to the research question, including a statement about why the proposed answer is correct. A theory is a system of concepts and relationships between those concepts, that collectively presents a logical, systematic, and coherent explanation of a phenomenon of interest.
(3) Hypotheses: Propose 1-5 hypotheses derived from the theory. The hypotheses identify observable implications of the theory, i.e., things we would observe if the theory is correct, and make predictions about relationships between measurable indicators of the theory's concepts.
(4) Policy Implication: Explain how the research could help policymakers to adjust their decisions, or implement policy more effectively or justly.
Here are examples of research ideas on other topics.
{content of two example ideas}
You should make sure to come up with your own novel and different ideas for the specified topic: {research topic}. You should make each idea standalone and not dependent on the other ideas. You should avoid repeating generating ideas with the following existing research questions, and try to be different and diverse:
{existing
ideas generated}
Please write down your {number of ideas to generate} ideas. Output the ideas in json format as a dictionary, where the key is 'ideas', and the value is a list of ideas. Each idea has keys 'Research Question', 'Theory', 'Hypotheses', and 'Policy Implication'. The value of 'Hypotheses' is a list of strings, and the value of other keys is a string.

Table 13: Example prompt for idea generation with AI-Researcher. The same idea format instructions, example ideas, and metadata are also provided to GPT-Researcher and Chain-of-Ideas.

Prompt for Both Idea Selection and Automatic Evaluation in §3
You are an expert researcher in political science. You are given two research ideas related to the topic {research topic}. Your task is to identify which idea is better from the following four dimensions 'Significance', 'Novelty', 'Feasibility', and 'Expected Effectiveness'.
Each research idea comprises the following three parts.
Research Question: A specific question about a behavior, event, or phenomenon of interest that the researcher wishes to seek answers for in the research.
Theory: Reasonably speculate on the answer to the research question, including a statement about why the proposed answer is correct.
Hypotheses: Identify observable implications of the theory, i.e., things we would observe if the theory is correct, and make predictions about relationships between measurable indicators of the theory's concepts.
Evaluation Criteria:
{detailed content of the evaluation criteria}
Note: Please make your decision based on the weighted assessment of sub-criteria to avoid subjective bias. Avoid any position biases and ensure that the order of the two ideas does not influence your decision. DO NOT allow the LENGTH of the ideas to influence your evaluation. Be as objective as possible.
Here are the two research ideas for you to assess:
Idea 1: {content of idea 1}
Idea 2: {content of idea 2}
Please provide an explanation supporting your assessment.
At the last line of your response, format your assessment in JSON with the keys: 'Significance', 'Novelty', 'Feasibility', and 'Expected Effectiveness'. The value of each key is an integer ranging from 0 to 2. 0 means a tie, 1 means idea 1 is better, and 2 means idea 2 is better.

Table 14: Example prompt for both idea selection and automatic evaluation in §3.

Prompt for Feasibility Check
You are an expert researcher in political science. Given a research idea with the components of 'Research Question', 'Theory', 'Hypotheses', along with descriptions of existing data, please determine the feasibility of validating the hypotheses using the provided data.
Here is the research idea:
{content of the idea}
Here is the existing data:
{content of the metadata in CLIMATE DATABANK}
Your task is as follows:
1. Feasibility Assessment:
- Evaluate whether it is possible to validate the hypotheses with the given data.
- If feasible, provide a validation plan and specify the data that will be used by their numbers. A hypothesis is considered feasible to validate if the concepts in the hypothesis can be measured with existing data.
- If not feasible, output 'Feasibility' as 'No'.
Note that the theory provides an answer and explanation
to the research question, and the hypotheses identify observable implications of the theory.
2. Output Requirements:
- Format your response in JSON with the keys: 'Feasibility', 'Validation Plan', and 'Data Used'.
- 'Feasibility': This can take values from ['Yes', 'No']. It indicates whether the hypotheses can be validated with the existing data.
- 'Validation Plan': A string detailing the plan to validate the hypotheses.
- 'Data Used': A list of numbers denoting which data are utilized in the validation process, keep the number of them within 3. As textual data is hard to handle, please only select necessary textual data, and keep the number of them within 1.
- If the hypotheses are infeasible to validate, only include 'Feasibility' in the JSON output.

Table 15: Example prompt for feasibility check.

Prompt for Hypothesis Validation
Please write code to validate the following hypotheses using the provided data.
Hypotheses: {hypotheses within the idea}
Data: {metadata of datasets selected}
The last line of your output should be the final answer, in the JSON format like {'Hypothesis 1': 'Supported', ...}. The value for each hypothesis should be 'Supported' or 'Not supported'. If the evidence for the hypothesis is insignificant/mixed/limited/partial, the hypothesis is also classified as not supported.

Table 16: Example prompt for hypothesis validation.

Prompt for Validation Process Summarization
Here is the validation process of several hypotheses. It contains steps in both text and code formats. For steps in text format, the step contains keys 'type' and 'content'. For steps in code format, the step contains keys 'type', 'content', and 'output' or 'error'.
Please summarize the validation process in natural language, removing unnecessary steps and errors. Only keep the crucial reasoning process and results that lead to the final conclusion.
Your output should be a list in json structure. Each item in the list is a dict with keys 'type' and 'summarization'.
The value of 'type' is 'text' or 'code', and the value of 'summarization' is a string describing the step. Limit the output into 1000 tokens.
Original Validation Steps: {raw validation traces}
Output:

Table 17: Example prompt for validation process summarization.

Prompt for Idea Selection in §4
You are an expert researcher in political science. You are given two research ideas related to the topic {research topic}. Your task is to identify which idea is better from the following four dimensions 'Significance', 'Novelty', 'Feasibility', and 'Expected Effectiveness'.
Each research idea comprises the following four parts.
Research Question: A specific question about a behavior, event, or phenomenon of interest that the researcher wishes to seek answers for in the research.
Theory: Reasonably speculate on the answer to the research question, including a statement about why the proposed answer is correct.
Hypotheses: Identify observable implications of the theory, i.e., things we would observe if the theory is correct, and make predictions about relationships between measurable indicators of the theory's concepts.
Preliminary Validation: Summarization of the preliminary validation process of the hypotheses.
Evaluation Criteria:
{detailed content of the evaluation criteria}
Note: Please make your decision based on the weighted assessment of sub-criteria to avoid subjective bias. Avoid
any position biases and ensure that the order of the two ideas does not influence your decision. DO NOT allow the LENGTH of the ideas to influence your evaluation. Be as objective as possible.
Here are the two research ideas for you to assess:
Idea 1: {content of idea 1, containing the summarized validation process}
Idea 2: {content of idea 2, containing the summarized validation process}
Please provide an explanation supporting your assessment.
At the last line of your response, format your assessment in JSON with the keys: 'Significance', 'Novelty', 'Feasibility', and 'Expected Effectiveness'. The value of each key is an integer ranging from 0 to 2. 0 means a tie, 1 means idea 1 is better, and 2 means idea 2 is better.

Table 18: Example prompt for idea selection in §4, which differs from Table 14 in adding the validation results.

Figure 5: Annotation interface for human evaluation of idea pairs.
Figure 6: Annotation interface for human evaluation of hypothesis validation processes.
Figure 7: Experiment interface for the human study of proposing research ideas.
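The keyword-based ambition scoring and correlation check summarized in Table 12 can be sketched as below. The keyword list follows the summary ('increase', 'commitment', 'target'), but the documents, attendance counts, and the plain Pearson correlation are illustrative stand-ins for the LLM-generated validation code; the paper's actual traces are omitted in Table 12.

```python
# Hedged sketch of the validation step in Table 12: score "ambition" in
# national communications by keyword counts, then correlate it with NSA
# participation. The documents and attendance figures below are invented
# toy data, not values from the paper.
import math
import re

KEYWORDS = ("increase", "commitment", "target")

def ambition_score(text):
    """Count keyword occurrences (case-insensitive, whole words)."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(words.count(k) for k in KEYWORDS)

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy national communications and NSA attendance counts per country-year.
docs = [
    "We commit to increase our mitigation target and renew our commitment.",
    "The plan mentions one target.",
    "No relevant keywords here.",
]
nsa_attendance = [120, 40, 5]
ambition = [ambition_score(d) for d in docs]  # -> [3, 1, 0]
r = pearson_r(nsa_attendance, ambition)       # positive on this toy data
```

In the actual pipeline a significance test would accompany the correlation; this sketch only illustrates the measurement step that turns text into a testable indicator.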
arXiv:2505.21398v1 [cs.AI] 27 May 2025

A Structured Unplugged Approach for Foundational AI Literacy in Primary Education

Maria Cristina Carrisi1[0000−0002−2837−3971], Mirko Marras1[0000−0003−1989−6057], and Sara Vergallo2[0009−0006−2129−5583]
1 University of Cagliari, Cagliari, Italy {mariacri.carrisi,mirko.marras}@unica.it
2 University of Macerata, Macerata, Italy s.vergallo@unimc.it

Abstract. Younger generations are growing up in a world increasingly shaped by intelligent technologies, making early AI literacy crucial for developing the skills to critically understand and navigate them. However, education in this field often emphasizes tool-based learning, prioritizing usage over understanding the underlying concepts. This lack of knowledge leaves non-experts, especially children, prone to misconceptions, unrealistic expectations, and difficulties in recognizing biases and stereotypes. In this paper, we propose a structured and replicable teaching approach that fosters foundational AI literacy in primary students, by building upon core mathematical elements closely connected to and of interest in primary curricula, to strengthen conceptualization, data representation, classification reasoning, and evaluation of AI. To assess the effectiveness of our approach, we conducted an empirical study with thirty-one fifth-grade students across two classes, evaluating their progress through a post-test and a satisfaction survey. Our results indicate improvements in terminology understanding and usage, features description, logical reasoning, and evaluative skills, with students showing a deeper comprehension of decision-making processes and their limitations. Moreover, the approach proved engaging, with students particularly enjoying activities that linked AI concepts to real-world reasoning. Materials: https://github.com/tail-unica/ai-literacy-primary-ed

Keywords: AI Teaching · Unplugged Learning · K-12 Education.

1 Introduction

Motivation.
Artificial Intelligence (AI) is now embedded in everyday life, shaping how individuals interact with technology and process information. Children, in particular, encounter AI-powered systems daily, when using voice assistants, watching recommended videos, or playing adaptive educational games, as examples. However, without a foundational understanding of AI, they risk passively interacting with these technologies, leading to misconceptions, unrealistic expectations, and inability to assess algorithmic outputs [40,41]. For example, a child might trust a chatbot's historical explanation without questioning its accuracy. Early AI literacy becomes thus crucial for enabling young learners to comprehend how such systems function, be aware that they are limited systems, recognize such limitations, and evaluate their implications. This is crucial to raise responsible citizens of the future who use technology with awareness [36].

AI literacy extends beyond technology use; it requires and reinforces fundamental mathematical concepts. AI models, for instance, operate on principles similar to sorting, helping students understand how objects can be grouped based on shared features, like categorizing personal expenses into needs. Likewise, AI systems rely on measures of accuracy to assess their performance, similar to how students would track their progress in physical activities by comparing running times or steps taken each day. AI literacy also strengthens logical progression by encouraging students to follow sequences, recognize patterns, and check their conclusions, as they would do when following a set of instructions for a project. By developing AI literacy, children not only become critical AI users but also build strong problem-solving and analytical skills and enforce math knowledge.

Prior Works. AI has been a subject of study since the mid-20th century, with early discussions emphasizing the need to structure its foundational studies.
The pedagogical implications of AI education have been considered since the early 1970s
[25], showing the importance of integrating AI and CS into education from childhood. However, limited computational resources historically slowed down both AI development and its integration into curricula. In recent years, AI and robotics have renewed interest in early AI education, prompting organizations to establish initiatives, e.g., the Digital Education Action Plan [11], Informatics for All [16], and the National AI Initiative in the US [37]. Many countries are starting to consider including CS in school curricula, often including AI topics.

With this growing interest, the need for structured frameworks has become urgent. For instance, the AI4K12 initiative [1] aimed to define "Five Big Ideas" that every K-12 student should understand about AI [34,35]. Numerous tools [8,13,23,14] have been developed to facilitate AI learning, allowing students to experiment with underlying concepts. While offering an accessible entry point into AI, these tools often function as "black boxes", limiting students’ understanding of the core mechanisms [38]. This raises concerns about whether current AI education fosters true comprehension or just reinforces procedural familiarity.

Open Issues. Studies indicate that exposure to AI tools alone does not necessarily improve conceptual understanding. For instance, using AI-based tools and robots was found not to enhance students’ CS competence more than other activities, nor their awareness of AI’s functioning [5]. Similarly, a recent survey [29] shows a lack of structured learning paths for AI, with the existing ones often focused on tool usage rather than concepts. While unplugged activities have been proposed to address this gap [20,21,30], they mainly target older students. Therefore, there remains a lack of well-structured curricular proposals for primary school students, particularly on foundational AI concepts.
A key challenge for non-expert learners in CS education is the underlying mathematical difficulty [33], which becomes pronounced in data-related topics such as AI. Deficiencies in classification (sets) and data representation (trees, tables) [15] hinder students’ ability to engage with AI concepts, though these skills are introduced in primary school. However, they are often insufficiently emphasized in early education [15]. To our knowledge, no learning path systematically integrates AI education with mathematical skill development for primary schools.

Contributions. In this paper, we explore how primary education can effectively integrate foundational AI concepts with mathematics to enhance students’ conceptual understanding and engagement. Specifically, we investigate how a structured learning path can bridge AI and mathematical reasoning by emphasizing, for instance, classification, set theory, and data representation. To this end, we designed an unplugged, hands-on curriculum that introduces the theoretical and mathematical foundations of AI through interactive and problem-solving activities. Our research focuses on assessing the impact of this approach on students’ AI comprehension (RQ1), its role in strengthening mathematical skills (RQ2), and its effectiveness in fostering engagement and interest (RQ3).

To address these research questions, we make three key contributions. First, we present the design of a novel learning path that systematically integrates AI principles with mathematical concepts, ensuring alignment with primary school curricula. Second, we detail the implementation of this learning path, including the selection and preparation of instructional materials, which were carefully curated to
support students’ cognitive development in both AI and mathematics. Third, we evaluate the effectiveness of the learning path through a study involving two primary school classes, analyzing both quantitative performance and qualitative feedback to assess conceptual gains and engagement.

2 The Proposed Learning Path

In this section, we present the design and implementation of a structured learning path aimed at introducing primary school students to key AI concepts while integrating foundational mathematical reasoning. Our approach is designed to provide a coherent and progressive experience. The path is structured to reinforce prior knowledge, guide students in identifying system limitations, and introduce new representation models to support cognitive development. We ground our design in constructivism and constructionism [26,25], and adopt a spiral learning approach [4], ensuring that concepts are reintroduced at increasing levels of complexity.

To implement this path, we combine original instructional materials with adapted resources from established educational frameworks. The learning modules are structured following learning-by-doing [10] and learning-by-necessity [31] methodologies. These strategies encourage students to actively experiment, refine their knowledge, and engage in iterative problem-solving, with targeted teacher interventions whenever prior approaches prove insufficient. Additionally, semiotic representation in reasoning [27] facilitates students’ understanding of classification concepts via multiple representational models, e.g., Euler-Venn diagrams, tabular data structures, and decision trees. Topics and themes are chosen according to four out of the "Five Big Ideas" [1,34], namely perception, representation and reasoning, learning, and societal impact3.

2.1 Module 1: Introduction to AI (2 hours)

The first module establishes the foundations of CS and AI, providing students with the knowledge to comprehend how AI uses data.
An ice-breaking questionnaire allows us to monitor preconceptions about CS and AI and to evaluate initial classification and argumentative abilities. Then, the session aims to dismantle common misconceptions, emphasizing AI as a human-engineered tool designed to automate data processing, rather than an autonomous entity possessing intelligence.

The lesson begins with an exploration of CS as a discipline, tracing its origins and highlighting its role in developing computational tools for automated information processing. Some historical examples illustrate how humans have always striven to optimize their work, helping students to see AI as a continuation of past efforts to enhance automation, and sparking discussions on the societal impact of new technologies on the job market. A key distinction is introduced between data and information. Students are introduced to computers as machines that receive, process, store, and output data based on precisely defined human instructions and not on some intrinsic intelligence. Errors in AI systems originate from human design flaws rather than intrinsic machine faults.

To bridge theoretical concepts with real-world applications and to let students experience different ways of computer perception, students analyze automation systems, including a supermarket checkout system and two automated irrigation systems. The first illustrates how barcode scanners convert product codes into meaningful data, enabling automated price retrieval and cost calculation. The second compares handmade and industrial moisture sensors to demonstrate how different devices influence data quality. The handmade sensor uses alligator clips and aluminum foil and detects only two states (presence or absence of water), while the industrial sensor
provides specific humidity values. The third system uses humidity and light sensors to determine when watering is necessary, following a structured set of predefined logical rules. This helps students understand how multiple data sources enhance perception and how such systems rely on rule-based decision-making.

Subsequently, we consider it fundamental that students experience a machine-learning-based tool. We suggest the first 5 steps of the AI for the Oceans activity [8]. It is designed for autonomous use by children and is accompanied by explanatory videos that can be skipped, so as to personally guide students across the activities, to emphasize some steps (train and test), their ordering and meaning, and to introduce specific terminology with contextualization (data, labels, model, rule). This hands-on exercise also allows students to observe how incorrect training (e.g., mislabeling an apple as a fish) leads to inaccurate recognition. In alignment with a learning-by-doing approach, students engage in discussions about AI’s limitations, reinforcing the idea that AI does not "think" independently but operates within structured input-output relationships, ultimately functioning as an extension of human decision-making.

3 As CS is not generally taught in primary schools in anonymized country and many others, we excluded natural interaction to prioritize foundational computational thinking over more advanced human-computer interaction concepts.

2.2 Module 2: Classification Principles (2 hours)

The second module introduces students to classification, focusing on how AI categorizes objects based on predefined rules extracted by data-driven processes. The session follows a spiral learning approach, reinforcing prior knowledge from the first module while systematically expanding students’ understanding.
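As a minimal sketch of the rule-based decision-making behind Module 1's third irrigation system: the function below combines two sensor readings through predefined logical rules. The thresholds, sensor names, and the rule itself are illustrative assumptions, not the actual rules used in the classroom activity.

```python
# Hypothetical sketch of a rule-based irrigation decision combining two
# sensor readings, in the spirit of Module 1's third system. Thresholds
# and the rule structure are invented for illustration.

def should_water(soil_humidity: float, light_level: float) -> bool:
    """Apply predefined logical rules to two sensor readings."""
    soil_is_dry = soil_humidity < 30.0   # assumed dryness threshold (%)
    strong_light = light_level > 0.7     # assumed normalized light threshold
    # Water only when the soil is dry and the light is not strong,
    # so plants are not watered in full sun.
    return soil_is_dry and not strong_light

print(should_water(20.0, 0.2))  # dry soil, weak light -> True
print(should_water(80.0, 0.2))  # moist soil -> False
```

The second sensor refines the decision, which is the paper's point that multiple data sources enhance perception.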
Students engage in a classification task, based on constructivist learning principles, deriving models by observing object attributes and applying them to classify new entities, such as a fictional Monster family [6]. Initially, students are expected to classify new members based on salient individual features like hairstyle or ear shape [6]. Through guided reflection, they recognize the inconsistencies of single-feature classification and refine their strategies by incorporating multiple distinguishing characteristics, leading to a more structured, frequency-based approach. After identifying classification features, students explore rule-based models: using a single characteristic as a predictor, or a threshold-based approach, where a creature belongs to the Monster family if it exhibits more than half of the defining traits. This leads to a discussion on accuracy evaluation, where students assess their models by comparing predictions with known examples. They explore how different classification rules impact model performance and generalization to unseen data. By testing their models on an unknown dataset, students identify cases where rules fail, introducing the concept of overfitting.

The session ends with a reflection on the real-world applications of AI-based classification systems. Rule-based models provide a structured approach to decision-making but often require data-driven refinement.

2.3 Module 3: Classification Representations (2 hours)

The third module introduces structured classification, focusing on how models implement rule-based decision-making to categorize objects. Students expand upon classification concepts by means of multiple representational models. The lesson begins with a discussion on classification principles using two classes, poisonous vs. non-poisonous mushrooms, as an example of feature-based differentiation. Students examine how specific characteristics, such as cap color, gill structure, and stem shape, can
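As a concrete illustration (not part of the classroom materials), Module 2's threshold-based rule and the accompanying accuracy evaluation can be sketched as follows; the trait names and example data are invented for this sketch.

```python
# Hypothetical sketch of Module 2's threshold-based classifier: a creature
# belongs to the Monster family if it exhibits more than half of the
# defining traits. Traits and examples are invented for illustration.

DEFINING_TRAITS = {"horns", "three_eyes", "striped_tail", "fangs"}

def is_monster(traits: set) -> bool:
    matches = len(traits & DEFINING_TRAITS)
    return matches > len(DEFINING_TRAITS) / 2  # strictly more than half

def accuracy(examples) -> float:
    """Fraction of known examples the rule classifies correctly."""
    correct = sum(is_monster(traits) == label for traits, label in examples)
    return correct / len(examples)

known_examples = [
    ({"horns", "three_eyes", "fangs"}, True),  # 3 of 4 traits -> Monster
    ({"horns", "striped_tail"}, False),        # exactly half: not enough
]
print(accuracy(known_examples))  # 1.0: the rule fits the known data
```

Re-running the same rule on unseen creatures, as the students do with the unknown dataset, can reveal misclassifications and motivates the discussion of overfitting.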