Title: Strategic Linear Contextual Bandits

URL Source: https://arxiv.org/html/2406.00551

arXiv:2406.00551v2 [cs.LG] 26 Sep 2024

License: arXiv.org perpetual non-exclusive license

Strategic Linear Contextual Bandits

Thomas Kleine Buening (The Alan Turing Institute), Aadirupa Saha (Apple ML Research), Christos Dimitrakakis (University of Neuchatel), Haifeng Xu (University of Chicago)

Abstract

Motivated by the phenomenon of strategic agents gaming a recommender system to maximize the number of times they are recommended to users, we study a strategic variant of the linear contextual bandit problem, where the arms can strategically misreport privately observed contexts to the learner. We treat the algorithm design problem as one of mechanism design under uncertainty and propose the Optimistic Grim Trigger Mechanism (OptGTM) that incentivizes the agents (i.e., arms) to report their contexts truthfully while simultaneously minimizing regret. We also show that failing to account for the strategic nature of the agents results in linear regret. However, a trade-off between mechanism design and regret minimization appears to be unavoidable. More broadly, this work aims to provide insight into the intersection of online learning and mechanism design.

1 Introduction

Recommendation algorithms that select the most relevant item for sequentially arriving users or queries have become vital for navigating the internet and its many online platforms. However, recommender systems are susceptible to manipulation by strategic agents seeking to artificially increase their frequency of recommendation [33, 31, 38]. These agents, ranging from sellers on platforms like Amazon to websites aiming for higher visibility in search results, employ tactics such as altering product attributes or engaging in aggressive search engine optimization [32, 29]. By gaming the algorithms, agents attempt to appear more relevant than they actually are, often compromising the integrity and intended functionality of the recommender system. Here, the key issue lies in the agents' incentive to manipulate the learning algorithm to maximize their utility (i.e., profit). To address this challenge, we study and design algorithms in a strategic variant of the linear contextual bandit, where the agents (i.e., arms) have the ability to misreport privately observed contexts to the learner. Our main contribution is connecting online learning with approximate mechanism design to minimize regret while, at the same time, discouraging the arms from gaming our learning algorithm.

The contextual bandit [2, 24] is a generalization of the multi-armed bandit problem to the case where the learner observes relevant contextual information before pulling an arm. It has found application in various domains, including healthcare [37] and online recommendation [25]. We here focus on the linearly realizable setting [6, 1], where each arm's reward is a linear function of the arm's context in the given round. In the standard linear contextual bandit, at the beginning of round $t$, the learner observes the context $x^*_{t,i}$ of every arm $i \in [K]$, selects an arm $i_t$, and receives a reward drawn from a distribution with mean $\langle \theta^*, x^*_{t,i_t} \rangle$, where $\theta^*$ is an unknown parameter. In the strategic linear contextual bandit, we assume that each arm is a self-interested agent that wants to maximize the number of times it gets pulled by manipulating its contexts.

More precisely, we consider the situation where each arm $i$ privately observes its true context $x^*_{t,i}$ every round, e.g., its relevance to the user arriving in round $t$, but reports a potentially gamed context vector $x_{t,i}$ to the learner. The learner does not observe the true contexts, but only the reported contexts $\mathcal{X}_t := \{x_{t,1}, \dots, x_{t,K}\}$, and chooses an action from this gamed action set $\mathcal{X}_t$. When the learner pulls arm $i_t$, it observes a reward $r_{t,i_t}$ drawn from a distribution with mean $\langle \theta^*, x^*_{t,i_t} \rangle$. In other words, the arms can manipulate the contexts the learner observes, but cannot influence the underlying reward. This is often the case in practice, as superficially changing attributes or metadata has no effect on an item's true relevance to a user.

In summary, our contributions are:

- We introduce a strategic variant of the linear contextual bandit problem, where each arm, in every round, can misreport its context to the learner to maximize its utility, defined as the total number of times the learner selects the arm over $T$ rounds (Section 3). We demonstrate that incentive-unaware algorithms, which do not explicitly consider the incentives they (implicitly) create for the arms, suffer linear regret in this strategic setting when the arms respond in Nash equilibrium (NE) (Proposition 3.3). This highlights the necessity of integrating mechanism design with online learning techniques to minimize regret in the presence of strategic arms.

- We begin with the case where $\theta^*$ is known to the learner in advance (Section 4). This simplifies the problem setup, allowing us to establish fundamental concepts while highlighting the challenges of designing a sequential mechanism without payments. For this scenario, we propose the Greedy Grim Trigger Mechanism (GGTM), which incentivizes the arms to be approximately truthful while minimizing regret. We show that:

    a) Truthful reporting is an $\tilde{\mathcal{O}}(\sqrt{T})$-NE for the arms (Theorem 4.1),

    b) GGTM has $\tilde{\mathcal{O}}(K^2 \sqrt{KT})$ regret under every NE of the arms (Theorem 4.2).

- Next, we consider the case where $\theta^*$ is unknown to the learner in advance (Section 5). Without access to the true contexts, estimating $\theta^*$ accurately appears intractable, as the arms can manipulate the estimation process. Surprisingly, we show that learning $\theta^*$ is not necessary for minimizing regret in the strategic linear contextual bandit. We construct confidence sets (which may not contain $\theta^*$) to derive pessimistic and optimistic estimates of our expected reward. These estimates are used to construct the Optimistic Grim Trigger Mechanism (OptGTM). Despite possibly incorrect estimates of $\theta^*$, OptGTM bounds the impact of misreported contexts on both regret and the utility of all arms. We show that:

    a) Truthfulness is an $\tilde{\mathcal{O}}(d\sqrt{KT})$-NE for which OptGTM has regret $\tilde{\mathcal{O}}(d\sqrt{KT})$ (Theorem 5.1),

    b) OptGTM incurs at most $\tilde{\mathcal{O}}(d K^2 \sqrt{KT})$ regret under every NE of the arms (Theorem 5.2).

- Finally, we support our theoretical findings with simulations of strategic gaming behavior in response to OptGTM and LinUCB (Section 6). We simulate how strategic arms adapt their reported contexts over time by equipping the arms with decentralized gradient ascent and letting the arms (e.g., vendors) and the learner (e.g., platform) repeatedly interact over several epochs. The experiments confirm the effectiveness of OptGTM and illustrate the shortcomings of incentive-unaware algorithms such as LinUCB.

2 Related Work

Linear Contextual Bandits.

In related work on linear contextual bandits with adversarial reward corruptions [45, 20, 3, 39], an adversary corrupts the reward observation in round $t$ by some amount $c_t$ but not the observed contexts. In this problem, the optimal regret is $\Theta(d\sqrt{T} + dC)$, where $C := \sum_t |c_t|$ is the adversary's budget. To the best of our knowledge, adversarial context corruptions have only been studied by [8], who achieve $\tilde{\mathcal{O}}(d\sqrt{\tilde{C}T})$ regret with $\tilde{C} := \sum_{t,i} \|x^*_{t,i} - x_{t,i}\|$, where $x^*_{t,i}$ and $x_{t,i}$ are the true and corrupted contexts, respectively. In contrast, we do not assume a bounded corruption budget, so these regret guarantees become vacuous (cf. Proposition 3.3). Moreover, instead of taking the worst-case perspective of purely adversarial manipulation, we assume that each arm is a self-interested agent maximizing its own utility.

Strategic Multi-Armed Bandits.

Braverman et al. [4] were the first to study a strategic variant of the multi-armed bandit problem; they considered the case where the pulled arm privately receives the reward and shares only a fraction of it with the learner. An extension of this setting has recently been studied in [12]. In other lines of work, [13, 9] study the robustness of bandit learning against strategic manipulation but simply assume a bounded manipulation budget instead of performing mechanism design. [35, 11] consider multi-armed bandits with replicas, where strategic agents can submit replicas of the same arm to increase the number of times one of their arms is pulled. Buening et al. [5] combine multi-armed bandits with mechanism design to discourage clickbait in online recommendation. In their model, each arm maximizes its total number of clicks and is characterized by a strategically chosen click-rate and a fixed post-click reward. However, all of these works differ substantially from ours in problem setup and/or methodology.

Modeling Incentives in Recommender Systems.

A complementary line of work studies content creator incentives in recommender systems [16, 27, 40, 41, 42, 22, 23, 21] and how algorithms shape the behavior of agents more generally [7]. These works primarily focus on modeling content creator behavior and studying content creator incentives under existing algorithms. Instead, our goal is the design of incentive-aware learning algorithms which incentivize content creators (arms) to act in a desirable fashion (truthfully) while maximizing the recommender system’s performance.

Strategic Learning.

We also want to mention the extensive literature on strategic learning [44, 14, 18, 43, 19, 26, 15] and strategic classification [17, 34, 36, 10]. Similarly to the model we study in this paper, the premise is that rational agents strategically respond to the learner's algorithm (e.g., a classifier) to obtain a desired outcome. However, in these settings the learner interacts with the agents only once, and the agents are assumed to be myopic and to suffer a cost for, e.g., altering their features. Moreover, unlike in the strategic linear contextual bandit, there is no competition among the agents. In contrast to these works, we wish to design a sequential (online learning) mechanism that incentivizes truthfulness, which is only possible because we repeatedly interact with the same set of agents (i.e., arms).

3 Strategic Linear Contextual Bandits

We study a strategic variant of the linear contextual bandit problem, where $K$ strategic agents (i.e., arms) aim to maximize their number of pulls by misreporting privately observed contexts to the learner. The learner follows the usual objective of minimizing regret, i.e., maximizing cumulative reward, despite not observing the true contexts. We focus on the case where the strategic arms respond in Nash equilibrium to the learning algorithm. The interaction between the environment, the learning algorithm, and the arms is specified in Interaction Protocol 1.

Notice that the arms can manipulate the contexts that the learner observes (the learner only ever observes these gamed contexts, never the actual contexts), but the rewards are generated w.r.t. the true contexts. Moreover, if all arms are non-strategic and, irrespective of the learning algorithm, report their features truthfully every round, i.e., $x_{t,i} = x^*_{t,i}$ for all $(t,i) \in [T] \times [K]$, the problem reduces to the standard non-strategic linear contextual bandit.

Interaction Protocol 1: Strategic Linear Contextual Bandits

```
1  Learner publicly commits to algorithm M
2  for t = 1, ..., T do
3      Every arm i ∈ [K] privately observes its context x*_{t,i} ∈ R^d
4      Every arm i ∈ [K] reports a (potentially gamed) context x_{t,i} ∈ R^d to the learner
5      Learner observes the gamed contexts X_t = {x_{t,1}, ..., x_{t,K}}, selects arm i_t ∈ [K],
       and receives reward r_{t,i_t} := <θ*, x*_{t,i_t}> + η_t
```

where $\eta_t$ is zero-mean sub-Gaussian noise. Note that the reward is generated with respect to the unknown parameter $\theta^* \in \mathbb{R}^d$ and the unobserved true context $x^*_{t,i_t}$.

3.1 Strategic Arms and Nash Equilibrium

We assume that each arm $i$ reports a possibly gamed context $x_{t,i}$ to the learner after observing its true context $x^*_{t,i}$ and potentially other information. For example, the arms may have prior knowledge of $\theta^*$ and observe the identity of the selected arm at the end of each round. However, we do not make any specific assumptions about the observational model of the arms; our results can be viewed as a worst-case analysis over all such models. For concreteness, consider the case where the arms have prior knowledge of $\theta^*$ and, at the end of every round, observe which arm was selected and the generated reward.1

Let $\sigma_i$ be a (mixed) strategy of arm $i$ that is history-dependent and in every round $t$ maps the observed true context $x^*_{t,i}$ to a distribution over reported contexts $x_{t,i}$ in $\mathbb{R}^d$. We write $\sigma_{-i}$ for the strategies of all arms except $i$ and define a strategy profile of the arms as $\boldsymbol{\sigma} := (\sigma_1, \dots, \sigma_K)$. We call arm $i$ truthful if it truthfully reports its privately observed context every round, i.e., $x_{t,i} = x^*_{t,i}$ for all $t \in [T]$. This truthful strategy is denoted $\sigma^*_i$, and we let $\boldsymbol{\sigma}^* := (\sigma^*_1, \dots, \sigma^*_K)$.

We now formally define the objective of the arms. Let $n_T(i) := \sum_{t=1}^{T} \mathbb{1}\{i_t = i\}$ be the number of times arm $i$ is pulled by the learner's algorithm $M$. The objective of every arm is to maximize the expected number of times it is pulled by the algorithm, given by

$$u_i(M, \boldsymbol{\sigma}) := \mathbb{E}_M[\, n_T(i) \mid \boldsymbol{\sigma} \,],$$

where we condition on the arm strategies $\boldsymbol{\sigma}$, as these will (typically) impact the algorithm's decisions. We assume that the arms respond to the learning algorithm $M$ in Nash Equilibrium (NE).

Definition 3.1 (Nash Equilibrium).

We say that $\boldsymbol{\sigma} = (\sigma_1, \dots, \sigma_K)$ forms a NE under the learner's algorithm $M$ if for all $i \in [K]$ and any deviating strategy $\sigma'_i$:

$$\mathbb{E}_M[\, n_T(i) \mid \sigma_i, \sigma_{-i} \,] \ge \mathbb{E}_M[\, n_T(i) \mid \sigma'_i, \sigma_{-i} \,].$$

Let $\mathrm{NE}(M) := \{\boldsymbol{\sigma} : \boldsymbol{\sigma} \text{ is a NE under } M\}$ be the set of NE under algorithm $M$. We also consider $\varepsilon$-NE, which relax the requirement that no arm has an incentive to deviate.

Definition 3.2 ($\varepsilon$-Nash Equilibrium).

We say that $\boldsymbol{\sigma} = (\sigma_1, \dots, \sigma_K)$ forms an $\varepsilon$-NE under algorithm $M$ if for all $i \in [K]$ and any deviating strategy $\sigma'_i$:

$$\mathbb{E}_M[\, n_T(i) \mid \sigma_i, \sigma_{-i} \,] \ge \mathbb{E}_M[\, n_T(i) \mid \sigma'_i, \sigma_{-i} \,] - \varepsilon.$$

3.2 Strategic Regret

In the strategic linear contextual bandit, the performance of an algorithm depends on the arm strategies that it incentivizes. Naturally, minimizing regret when the arms always report their contexts truthfully is easier than when contexts are manipulated adversarially. We are interested in the strategic regret of an algorithm $M$ when the arms act according to a Nash equilibrium under $M$. Formally, for $\boldsymbol{\sigma} \in \mathrm{NE}(M)$, the strategic regret of $M$ is defined as

$$R_T(M, \boldsymbol{\sigma}) := \mathbb{E}_{M, \boldsymbol{\sigma}}\Big[ \sum_{t=1}^{T} \langle \theta^*, x^*_{t,i^*_t} \rangle - \langle \theta^*, x^*_{t,i_t} \rangle \Big],$$

where $i^*_t = \operatorname{argmax}_{i \in [K]} \langle \theta^*, x^*_{t,i} \rangle$ is the optimal arm in round $t$. The regret guarantees of our algorithms hold uniformly over all NE that they induce, i.e., for $\max_{\boldsymbol{\sigma} \in \mathrm{NE}(M)} R_T(M, \boldsymbol{\sigma})$.
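Given the realized selections and the true contexts, the empirical counterpart of this regret is easy to compute; a sketch in Python (function and variable names are ours, and the toy numbers below are invented purely for illustration):

```python
import numpy as np

def empirical_regret(theta_star, x_true, pulls):
    """Empirical strategic regret: sum over rounds of the gap between the
    best arm's true mean reward and the pulled arm's true mean reward.
    x_true: (T, K, d) array of true contexts; pulls: length-T arm indices."""
    means = x_true @ theta_star                     # (T, K) true mean rewards
    best = means.max(axis=1)                        # <theta*, x*_{t, i*_t}>
    chosen = means[np.arange(len(pulls)), pulls]    # <theta*, x*_{t, i_t}>
    return float((best - chosen).sum())

# toy check: pulling the round-optimal arm every round gives zero regret
theta = np.array([1.0, 0.0])
X = np.array([[[1.0, 0.0], [0.5, 0.0]],    # round 1: arm 0 is better
              [[0.2, 0.0], [0.9, 0.0]]])   # round 2: arm 1 is better
print(empirical_regret(theta, X, [0, 1]))  # prints 0.0
print(empirical_regret(theta, X, [1, 0]))  # ≈ 1.2 (gaps 0.5 and 0.7)
```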

Regularity Assumptions.

We allow the true context vectors $x^*_{t,i}$ to be chosen adversarially by nature, and make the following assumptions about the linear contextual bandit model. We assume that both the context vectors and the rewards are bounded, i.e., $\max_{i,j \in [K]} \langle \theta^*, x^*_{t,i} - x^*_{t,j} \rangle \le 1$ and $\|x^*_{t,i}\|_2 \le 1$ for all $t \in [T]$. Moreover, we assume a constant optimality gap. That is, letting $\Delta_{t,i} := \langle \theta^*, x^*_{t,i^*_t} - x^*_{t,i} \rangle$, we assume that $\Delta := \min_{t,i : \Delta_{t,i} > 0} \Delta_{t,i}$ is constant.

3.3 The Necessity of Mechanism Design

The first question that arises in this strategic setup is whether mechanism design, i.e., actively aligning the arms' incentives, is necessary to minimize regret. As expected, we find that it is. Standard algorithms, which are oblivious to the incentives they create, implicitly incentivize the arms to heavily misreport their contexts, which makes minimizing regret virtually impossible.

We call a problem instance trivial if the algorithm that selects an arm uniformly at random every round achieves sublinear regret. Conversely, we call a problem instance non-trivial if uniform selection suffers linear expected regret. We show that being incentive-unaware generally leads to linear regret on non-trivial instances (even when the learner has prior knowledge of $\theta^*$).

Proposition 3.3.

On any non-trivial problem instance, the incentive-unaware greedy algorithm that in round $t$ plays $i_t = \operatorname{argmax}_{i \in [K]} \langle \theta^*, x_{t,i} \rangle$ (with ties broken uniformly) suffers linear regret $\Omega(T)$ when the arms act according to any Nash equilibrium under this algorithm. Note that the incentive-unaware greedy algorithm has knowledge of $\theta^*$.

Similarly, algorithms for stochastic linear contextual bandits (LinUCB [6, 1]) and algorithms for linear contextual bandits with adversarial context corruptions (RobustBandit [8]) suffer linear regret when the arms act according to any Nash equilibrium that the algorithms incentivize.

Proof Sketch.

We demonstrate that the only NE for the arms lies in strategies that myopically maximize the probability of being selected in every round, which results in linear regret for the learner, because all arms always appear similarly good. The proof can be found in Appendix B. ∎

Another natural question is whether exact incentive-compatibility is possible in the strategic linear contextual bandit. A learning algorithm is called incentive-compatible if truthfulness is a NE, i.e., reporting the true context $x_{t,i} = x^*_{t,i}$ every round maximizes each arm's utility [30, 14]. For the interested reader, we provide in Appendix A an incentive-compatible algorithm with constant regret in the fully deterministic case, where $\theta^*$ is known a priori and the rewards of pulled arms are directly observable. However, when $\theta^*$ is unknown and/or the reward observations are subject to noise, we conjecture that exact incentive-compatibility (i.e., truthfulness being an exact NE, not an $\varepsilon$-NE) is irreconcilable with regret minimization (cf. Appendix A).

4 Warm-Up: $\theta^*$ is Known in Advance

There are a number of challenges in the strategic linear contextual bandit. The most significant is the need to incentivize the arms to be (approximately) truthful while simultaneously minimizing regret by learning about $\theta^*$ and selecting the best arms, even when observing (potentially) manipulated contexts. Notably, Proposition 3.3 showed that if we fail to align the arms' incentives, minimizing regret becomes impossible. Therefore, in the strategic linear contextual bandit, we must combine mechanism design with online learning techniques.

The uncertainty about $\theta^*$ poses a serious difficulty when trying to design such incentive-aware learning algorithms. As we only observe $x_{t,i}$ and $r_{t,i} = \langle \theta^*, x^*_{t,i} \rangle + \eta_t$, but never the true context $x^*_{t,i}$, accurate estimation of $\theta^*$ is extremely challenging (and arguably intractable). We go into more depth in Section 5, where we introduce the Optimistic Grim Trigger Mechanism. For now, we consider the special case where $\theta^*$ is known to the learner in advance. This lets us highlight some of the challenges of connecting mechanism design with online learning in a less complex setting and introduce high-level ideas and concepts. When $\theta^*$ is known in advance, it can be instructive to consider what we refer to as the reported (expected) reward $\langle \theta^*, x_{t,i} \rangle$ instead of the reported context vector $x_{t,i}$ itself. Taking this perspective, when arm $i$ reports a $d$-dimensional vector $x_{t,i}$, we simply think of arm $i$ as reporting a scalar reward $\langle \theta^*, x_{t,i} \rangle$. In what follows, it will prove useful to keep this abstraction in mind.2

Mechanism 2: The Greedy Grim Trigger Mechanism (GGTM)

```
1  initialize: A_1 = [K]
2  for t = 1, ..., T do
3      Observe reported contexts x_{t,1}, ..., x_{t,K}
4      Play the (active) arm with largest reported reward: i_t = argmax_{i ∈ A_t} <θ*, x_{t,i}>
5      Observe reward r_{t,i_t} from playing arm i_t
6      if Σ_{ℓ ≤ t : i_ℓ = i_t} <θ*, x_{ℓ,i_t}> > UCB_t(r̂_{t,i_t}) then
7          Eliminate arm i_t from the active set: A_{t+1} ← A_t \ {i_t}
8      if A_{t+1} = ∅ then
9          Stop playing any arm and receive zero reward for all remaining rounds
```

4.1 The Greedy Grim Trigger Mechanism

One idea for a mechanism is to use a grim trigger. In repeated social dilemmas, the grim trigger strategy sustains cooperation among self-interested players by threatening defection in all remaining rounds once the grim trigger condition is satisfied [28]. Typically, the condition is defined so that it is immediately satisfied if a player has defected at least once.

In the strategic contextual bandit, from the learner's perspective, an arm 'cooperates' if it reports its context truthfully, and 'defects' when it reports a gamed context. However, when an arm reports some context $x_{t,i}$, we do not know whether it reported truthfully, because we do not have access to the true context $x^*_{t,i}$. For this reason, we instead compare the reported expected reward $\langle \theta^*, x_{t,i} \rangle$ and the true expected reward $\langle \theta^*, x^*_{t,i} \rangle$. While we cannot observe $\langle \theta^*, x^*_{t,i} \rangle$ directly either, we do observe $r_{t,i} := \langle \theta^*, x^*_{t,i} \rangle + \eta_t$.

Grim Trigger Condition.

Intuitively, if for any arm $i$ the total reported expected reward $\sum_{\ell \le t : i_\ell = i} \langle \theta^*, x_{\ell,i} \rangle$ is larger than the total observed reward $\hat{r}_{t,i} := \sum_{\ell \le t : i_\ell = i} r_{\ell,i}$, then arm $i$ must have been misreporting its contexts. However, $r_{\ell,i} := \langle \theta^*, x^*_{\ell,i} \rangle + \eta_\ell$ is random, so we instead use the optimistic estimate of the observed reward given by

$$\mathrm{UCB}_t(\hat{r}_{t,i}) := \sum_{\ell \le t : i_\ell = i} r_{\ell,i} + \sqrt{2\, n_t(i) \log(T)}, \qquad (1)$$

where $\sqrt{2\, n_t(i) \log(T)}$ is the confidence width, which can be derived from Hoeffding's inequality. To implement the grim trigger, we then eliminate arm $i$ in round $t$ if the total reported expected reward is larger than the optimistic estimate of the total observed reward, i.e.,

$$\sum_{\ell \le t : i_\ell = i} \langle \theta^*, x_{\ell,i} \rangle > \mathrm{UCB}_t(\hat{r}_{t,i}).$$

Note that using the optimistic estimate of the total observed reward ensures that elimination is justified with high probability. Conversely, we can guarantee with high probability that we do not erroneously eliminate a truthful arm.

Selection Rule.

To complete the Greedy Grim Trigger Mechanism (GGTM, Mechanism 2), we combine this elimination rule with a greedy selection rule that, in round $t$, pulls the arm with the largest reported reward $\langle \theta^*, x_{t,i} \rangle$ from the set of arms we believe have been truthful so far. Interestingly, even though $\theta^*$ is assumed known in advance here, GGTM still utilizes online learning techniques such as the optimistic estimate (1) to align the arms' incentives.
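The greedy selection rule and grim trigger can be sketched in a few lines of Python. This is our illustrative reading of Mechanism 2 under the stated assumptions (known $\theta^*$, Hoeffding width $\sqrt{2 n_t(i)\log T}$); the callback interface and toy reward model are ours, not the authors' implementation:

```python
import numpy as np

def ggtm(theta_star, report_fn, reward_fn, T, K):
    """Greedy Grim Trigger Mechanism (sketch).
    report_fn(t, i): reported context of arm i in round t.
    reward_fn(t, i): noisy reward drawn around arm i's TRUE mean reward."""
    active = set(range(K))
    reported_sum = np.zeros(K)   # cumulative reported mean reward while pulled
    observed_sum = np.zeros(K)   # cumulative observed reward
    n_pulls = np.zeros(K)
    history = []
    for t in range(T):
        if not active:           # every arm eliminated: stop playing, zero reward
            break
        reports = {i: report_fn(t, i) for i in active}
        i_t = max(active, key=lambda i: theta_star @ reports[i])  # greedy selection
        r = reward_fn(t, i_t)
        reported_sum[i_t] += theta_star @ reports[i_t]
        observed_sum[i_t] += r
        n_pulls[i_t] += 1
        # grim trigger: total reported reward exceeds optimistic observed total
        if reported_sum[i_t] > observed_sum[i_t] + np.sqrt(2 * n_pulls[i_t] * np.log(T)):
            active.discard(i_t)
        history.append(i_t)
    return history, active
```

For instance, with $\theta^* = (1)$, a truthful arm of mean reward 0.5 and an arm that always reports mean reward 1.0 while actually earning 0, the over-reporting arm is pulled only until its cumulative over-report exceeds the Hoeffding width, after which it is eliminated and the truthful arm is played for the rest of the horizon.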

It is also worth noting that, similar to its use in repeated social dilemmas, our grim trigger mechanism is mutually destructive in the sense that eliminating an arm for all remaining rounds is inherently bad for the learner (and, of course, for the eliminated arm as well).3 Herein lies the main challenge of the mechanism design: we must ensure that the arms are incentivized to "cooperate" (i.e., remain active) for a sufficiently long time.

4.2 Regret Analysis of GGTM

In what follows, we assume that each arm's strategy is restricted to reporting a 'reward' $\langle \theta^*, x_{t,i} \rangle$ that is not strictly lower than its true (mean) reward $\langle \theta^*, x^*_{t,i} \rangle$. It seems intuitive that no rational arm would ever under-report its value to the learner and make itself seem worse than it actually is. However, there are special cases where under-reporting allows an arm to manipulate arbitrarily without detection. We discuss this later in Remark 4.3 and, more extensively, in Appendix C.

Assumption 1.

We assume that $\langle \theta^*, x_{t,i} \rangle \ge \langle \theta^*, x^*_{t,i} \rangle$ for all $(t,i) \in [T] \times [K]$.

We now demonstrate that GGTM approximately incentivizes the arms to be truthful in the sense that the truthful strategy profile $\boldsymbol{\sigma}^*$, under which $x_{t,i} = x^*_{t,i}$ for all $(t,i) \in [T] \times [K]$, is an $\tilde{\mathcal{O}}(\sqrt{T})$-NE under GGTM. When the arms always report truthfully and no arm is erroneously eliminated, the greedy selection rule naturally selects the best arm every round, so that GGTM's regret is constant.

Theorem 4.1.

Under the Greedy Grim Trigger Mechanism, being truthful is an $\tilde{\mathcal{O}}(\sqrt{T})$-NE for the arms. The strategic regret of GGTM when the arms act according to this equilibrium is at most

$$R_T(\mathrm{GGTM}, \boldsymbol{\sigma}^*) \le 1/T.$$

Proof Sketch.

By design of the grim trigger, it is straightforward to show that the probability that a truthful arm gets eliminated is at most $1/T^2$. Moreover, the grim trigger ensures that no arm can 'poach' selections from a truthful arm more than order $\sqrt{T}$ times by misreporting its contexts. This achieves two things: (a) it protects truthful arms and guarantees that truthfulness is a good strategy, and (b) it limits an arm's profit from being untruthful. The proof can be found in Appendix D.2. ∎

Theorem 4.1 tells us that truthfulness is an approximate NE. We now also provide a more holistic strategic regret guarantee of $\tilde{\mathcal{O}}(K^2\sqrt{KT})$ in every Nash equilibrium under GGTM. Proving this is more involved, as the arms can profit from exploiting our uncertainty about their truthfulness (i.e., the looseness of the grim trigger).

Theorem 4.2.

The Greedy Grim Trigger Mechanism has strategic regret

$$R_T(\mathrm{GGTM}, \boldsymbol{\sigma}) = \mathcal{O}\Big( \underbrace{K\sqrt{T \log(T)}}_{\text{cost of manipulation}} + \underbrace{K^2\sqrt{K T \log(T)}}_{\text{cost of mechanism design}} \Big) \qquad (2)$$

for every $\boldsymbol{\sigma} \in \mathrm{NE}(\mathrm{GGTM})$. Hence, $\max_{\boldsymbol{\sigma} \in \mathrm{NE}(\mathrm{GGTM})} R_T(\mathrm{GGTM}, \boldsymbol{\sigma}) = \tilde{\mathcal{O}}(K^2\sqrt{KT})$.

Proof Sketch.

The regret analysis is notably more involved than that of Theorem 4.1, as we must bound both the regret due to the arms exploiting our uncertainty and the cost of committing to the grim trigger. Neither quantity plays a role when the arms always report truthfully (as in Theorem 4.1). A complete proof can be found in Appendix D. ∎

The regret bound (2) exposes two sources of regret. The first term is due to our mechanism design being approximate (it relies on estimates), which leaves room for the arms to exploit our uncertainty and misreport their contexts to obtain additional selections. The second term of (2) is the cost of the mechanism design itself, i.e., of committing to the grim trigger: we suffer constant regret in any round in which the round-optimal arm is no longer in the active set. In the worst case, this quantity is of order $K^2\sqrt{KT}$.

Remark 4.3.

We want to briefly comment on Assumption 1. It appears intuitive that a rational arm would never under-report its value, i.e., make itself look worse than it actually is. However, in Appendix C, we provide a simple example where occasionally under-reporting its value allows an arm to simulate an environment in which it is always optimal, even though it is in fact optimal only half of the time. The example shows that, without additional strong assumptions on the noise distribution, the two environments are indistinguishable, so that such manipulation by the arms appears unavoidable when trying to maximize rewards.

5 The Optimistic Grim Trigger Mechanism

The problem of estimating the unknown parameter $\theta^*$ appears daunting, given that the arms can strategically alter their contexts to bias our estimate of $\theta^*$ to their advantage. In fact, imagine an arm manipulating its contexts orthogonally to $\theta^*$ so that $\langle \theta^*, x_{t,i} - x^*_{t,i} \rangle = 0$ but $x_{t,i} \ne x^*_{t,i}$. Observing only $x_{t,i}$ and $r_{t,i} := \langle \theta^*, x^*_{t,i} \rangle + \eta_t$, our estimate of $\theta^*$ becomes biased and could be arbitrarily far from the true parameter $\theta^*$, even though the gamed context and the true context have the same reward w.r.t. $\theta^*$. This holds more generally: since we observe neither $\theta^*$ nor $x^*_{t,i}$, any observed combination of $x_{t,i}$ and $r_{t,i}$ will "make sense" to us. How, then, can we incentivize the arms to report truthfully and minimize regret despite incorrect estimates of $\theta^*$?

Our key observation is that learning $\theta^*$ is not necessary to incentivize the arms or to minimize regret; accurately estimating it appears to be a hopeless endeavour anyway. The idea of the Optimistic Grim Trigger Mechanism (OptGTM, Mechanism 3) is to construct pessimistic estimates of the total reward we expected from pulling an arm. Importantly, such pessimistic estimates of the expected (i.e., "reported") reward can be constructed even when the contexts are manipulated. OptGTM then threatens arms with elimination whenever our pessimistic estimate of the expected reward exceeds the optimistic estimate of the observed reward. Interestingly, this condition does not relate to the amount of corruption in feature space; in fact, $\sum_{t,i} \|x_{t,i} - x^*_{t,i}\|_2$ could become arbitrarily large. However, it does bound the effect of each arm's strategic manipulation on the decisions we make, thereby allowing for effective incentive design and regret minimization.

Mechanism 3: The Optimistic Grim Trigger Mechanism (OptGTM)

```
1  initialize: A_1 = [K]
2  for t = 1, ..., T do
3      Observe reported contexts X_t = {x_{t,1}, ..., x_{t,K}}
4      Play the active arm with largest reported optimistic reward:
           i_t = argmax_{i ∈ A_t} UCB_{t,i}(x_{t,i})
5      Receive reward r_{t,i_t} from playing arm i_t
6      if Σ_{ℓ ≤ t : i_ℓ = i_t} LCB_{ℓ,i_t}(x_{ℓ,i_t}) > UCB_t(r̂_{t,i_t}) then
7          Eliminate arm i_t from the active set: A_{t+1} ← A_t \ {i_t}
8      if A_{t+1} = ∅ then
9          Stop playing any arm and receive zero reward for all remaining rounds
```

To construct pessimistic (and optimistic) estimates of the expected reward, we use independent estimators $\hat{\theta}_{t,i}$ and confidence sets $C_{t,i}$ around $\hat{\theta}_{t,i}$, which do not take into account that the contexts are potentially manipulated. That is, we maintain a separate estimator and confidence set for each arm $i \in [K]$. This prevents one arm from influencing the elimination of another. It also rules out collusive behavior, in which a majority group of arms could dominate and steer our estimation process.

Confidence Sets.

For every arm $i \in [K]$ we define the least-squares estimator $\hat\theta_{t,i}$ w.r.t. its reported contexts and observed rewards before round $t$ as

$$\hat\theta_{t,i} = \operatorname{argmin}_{\theta \in \mathbb{R}^d} \Big( \sum_{\ell < t \,:\, i_\ell = i} \big( \langle \theta, x_{\ell,i} \rangle - r_{\ell,i} \big)^2 + \lambda \|\theta\|_2^2 \Big), \qquad (3)$$

where $\lambda > 0$. We then define the confidence set $C_{t,i} := \{\theta \in \mathbb{R}^d : \|\hat\theta_{t,i} - \theta\|_{V_{t,i}}^2 \le \beta_{t,i}\}$, where $\beta_{t,i} := \mathcal{O}(d \log(n_t(i)))$ is the confidence size. Here, $V_{t,i}$ is the covariance matrix of the reported contexts of arm $i$, given by $V_{1,i} := \lambda I$ and $V_{t,i} := \lambda I + \sum_{\ell < t \,:\, i_\ell = i} x_{\ell,i} x_{\ell,i}^\top$.

It is well known that if the contexts were always reported truthfully, i.e., $x_{t,i} = x^*_{t,i}$, then with high probability $\theta^* \in C_{t,i}$. But if the sequence of reported contexts $x_{t,i}$ substantially differs from the true contexts $x^*_{t,i}$, there is no (high-probability) guarantee that the true parameter $\theta^*$ is an element of $C_{t,i}$. In the literature on learning with adversarial corruptions (in linear contextual bandits), the standard approach to dealing with this is to widen the confidence set. However, for this to be effective we would need to assume a sufficiently small corruption budget for the arms and prior knowledge of the total amount of corruption, both of which we explicitly do not assume here.

Slightly overloading notation, we instead define the optimistic and pessimistic estimates of the expected reward of a context vector $x$ w.r.t. arm $i$ as

$$\operatorname{UCB}_{t,i}(x) := \langle \hat\theta_{t,i}, x \rangle + \beta_{t,i} \|x\|_{V_{t,i}^{-1}} \quad \text{and} \quad \operatorname{LCB}_{t,i}(x) := \langle \hat\theta_{t,i}, x \rangle - \beta_{t,i} \|x\|_{V_{t,i}^{-1}}.$$

We chose to state these estimates using additive bonuses. However, it can be convenient to think of them as $\operatorname{UCB}_{t,i}(x) \approx \max_{\theta \in C_{t,i}} \langle \theta, x \rangle$ and $\operatorname{LCB}_{t,i}(x) \approx \min_{\theta \in C_{t,i}} \langle \theta, x \rangle$.
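In one dimension this correspondence is exact: writing $\beta$ for the confidence radius, maximizing $\langle \theta, x \rangle$ over $|\theta - \hat\theta| \le \beta/\sqrt{V}$ yields exactly $\hat\theta x + \beta |x| / \sqrt{V}$. A quick numerical check with illustrative numbers:

```python
import math

# Illustrative numbers; beta here is the confidence radius in the bonus form.
theta_hat, V, beta, x = 0.53, 2.25, 0.4, 0.7

ucb = theta_hat * x + beta * abs(x) / math.sqrt(V)
lcb = theta_hat * x - beta * abs(x) / math.sqrt(V)

# endpoints of the 1-d confidence interval for theta
lo = theta_hat - beta / math.sqrt(V)
hi = theta_hat + beta / math.sqrt(V)

# the additive-bonus form equals the max/min of <theta, x> over the interval
assert abs(ucb - max(lo * x, hi * x)) < 1e-12
assert abs(lcb - min(lo * x, hi * x)) < 1e-12
print(ucb, lcb)
```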

Grim Trigger Condition.

In round $t \in [T]$, we eliminate arm $i$ from $A_t$ if the pessimistic estimate using the reports is larger than the optimistic estimate using the total observed reward, i.e.,

$$\sum_{\ell \le t \,:\, i_\ell = i} \big( \langle \hat\theta_{\ell,i}, x_{\ell,i} \rangle - \beta_{\ell,i} \|x_{\ell,i}\|_{V_{\ell,i}^{-1}} \big) > \sum_{\ell \le t \,:\, i_\ell = i} r_{\ell,i} + 2\sqrt{n_t(i)\log(T)}. \qquad (4)$$

In other words, $\sum_{\ell \le t \,:\, i_\ell = i} \operatorname{LCB}_{\ell,i}(x_{\ell,i}) > \operatorname{UCB}_t(\hat r_{t,i})$.

Examining the left side of (4), the careful reader may wonder why we do not simply use our latest and arguably best estimate $\hat\theta_{t,i}$, but instead the whole sequence of β€œoutdated” estimators $\hat\theta_{\ell,i}$ from previous rounds. In fact, this is crucial for the grim trigger. Using $\hat\theta_{t,i}$ renders the grim trigger condition ineffective because, by definition, $\hat\theta_{t,i}$ is the least-squares minimizer (3) of the difference between $\sum_{\ell \le t : i_\ell = i} \langle \hat\theta_{t,i}, x_{\ell,i} \rangle$ and $\sum_{\ell \le t : i_\ell = i} r_{\ell,i}$. Hence, when using $\hat\theta_{t,i}$, the grim trigger condition may not be satisfied even when the arms significantly and repeatedly misreport their contexts.

Selection Rule.

We complete the OptGTM algorithm by selecting arms optimistically with respect to each arm's own estimator and confidence set. That is, in round $t$, OptGTM selects the active arm with maximal optimistic (expected) reward $\operatorname{UCB}_{t,i}(x_{t,i}) := \langle \hat\theta_{t,i}, x_{t,i} \rangle + \beta_{t,i} \|x_{t,i}\|_{V_{t,i}^{-1}}$. We see that the grim trigger (4) incentivizes arms to ensure that, over the course of all rounds, $\operatorname{LCB}_{t,i}(x_{t,i})$ is not much smaller than $r_{t,i} := \langle \theta^*, x^*_{t,i} \rangle + \eta_t$. Hence, $\operatorname{UCB}_{t,i}(x_{t,i})$ is also not substantially smaller than the true mean reward $\langle \theta^*, x^*_{t,i} \rangle$. This suggests that playing optimistically is a good strategy for the learner as long as the selected arm does not satisfy (4).

5.1Regret Analysis of OptGTM

When πœƒ βˆ— was known to the learner in advance, we assumed that the arms never report a value smaller than their true value, i.e., ⟨ πœƒ βˆ— , π‘₯ 𝑑 , 𝑖 ⟩ β‰₯ ⟨ πœƒ βˆ— , π‘₯ 𝑑 , 𝑖 βˆ— ⟩ for all ( 𝑑 , 𝑖 ) ∈ [ 𝑇 ] Γ— [ 𝐾 ] . Now, when πœƒ βˆ— is unknown to the learner, we similarly assume that the arms do not report their optimistic value less than their true value. Again, it seems intuitive that in any given round, no arm would under-report its worth.

Assumption 2.

We assume that $\max_{\theta \in C_{t,i}} \langle \theta, x_{t,i} \rangle \ge \langle \theta^*, x^*_{t,i} \rangle$ for all $(t,i) \in [T] \times [K]$.

We find that OptGTM approximately incentivizes the arms to be truthful and, when the arms report truthfully, OptGTM suffers at most $\tilde{\mathcal{O}}(d\sqrt{KT})$ regret.

Theorem 5.1.

Under the Optimistic Grim Trigger Mechanism, being truthful is an $\tilde{\mathcal{O}}(d\sqrt{KT})$-NE. When the arms report truthfully, the strategic regret of OptGTM under this approximate NE is at most

$$R_T(\mathrm{OptGTM}, \boldsymbol{\sigma}^*) = \tilde{\mathcal{O}}\big(d\sqrt{KT}\big).$$

The optimal regret in standard non-strategic linear contextual bandits is $\Theta(d\sqrt{T})$, so that OptGTM is optimal up to a factor of $\sqrt{K}$ (and logarithmic factors) when the arms report truthfully. The additional factor of $\sqrt{K}$ is caused by the fact that OptGTM maintains independent estimates for each arm. We now also provide a strategic regret bound for every NE of the arms under OptGTM.

Theorem 5.2.

The Optimistic Grim Trigger Mechanism has strategic regret

$$R_T(\mathrm{OptGTM}, \boldsymbol{\sigma}) = \mathcal{O}\big(d \log(T) \sqrt{KT} + d \log(T)\, K^2 \sqrt{KT}\big)$$

for every $\boldsymbol{\sigma} \in \mathrm{NE}(\mathrm{OptGTM})$. Hence, $\max_{\boldsymbol{\sigma} \in \mathrm{NE}(\mathrm{OptGTM})} R_T(\mathrm{OptGTM}, \boldsymbol{\sigma}) = \tilde{\mathcal{O}}\big(d K^2 \sqrt{KT}\big)$.

The proof ideas of Theorem 5.1 and Theorem 5.2 are similar to their counterparts in Section 4. The main difference lies in a more technically challenging analysis of the grim trigger condition (4). We also see that, in contrast to non-strategic linear contextual bandits, where the regret typically does not depend on the number of arms $K$, Theorem 5.1 and Theorem 5.2 include a factor of $\sqrt{K}$ and $K^2\sqrt{K}$, respectively. A dependence on $K$ is expected due to the strategic nature of the arms, which forces us to explicitly incentivize each arm to be truthful. However, we conjecture that the regret bound in Theorem 5.2 is not tight in $K$ and expect the optimal dependence on the number of arms to be $\sqrt{K}$. The proofs of Theorem 5.1 and Theorem 5.2 can be found in Appendix E.

6Experiments: Simulating Strategic Context Manipulation

We here experimentally analyze the efficacy of OptGTM when the arms strategically manipulate their contexts in response to our learning algorithm. We compare the performance of OptGTM with the traditional LinUCB algorithm [6, 1], whichβ€”as shown in Proposition 3.3β€”implicitly incentivizes the arms to manipulate their contexts and, as a result, is expected to suffer large regret when the arms are strategic.

Contrary to the assumption of arms playing in NE, we here model strategic arm behavior by letting the arms update their strategies (i.e., what contexts to report) based on past interactions with the algorithms. More precisely, we assume that the strategic arms interact with the deployed algorithm (i.e., OptGTM or LinUCB) over the course of 20 epochs, with each epoch consisting of $T = 10{,}000$ rounds. At the end of each epoch, every arm updates its strategy using gradient ascent w.r.t. its utility. Importantly, this approach requires no prior knowledge from the arms, as they learn entirely through sequential interaction. This does not necessarily lead to equilibrium strategies, but instead serves as a way to study the performance and the implied incentives of OptGTM and LinUCB under a natural model of strategic gaming behavior.

Figure 1: Comparison of the strategic regret of OptGTM and LinUCB. The strategic arms adapt their strategies gradually over the course of 20 epochs. (a) Total strategic regret $R_T$ as the arms adapt their strategies to the deployed algorithm over the course of 20 epochs. (b) Epoch 0 (Truthful Arms): regret as a function of $t$ before the arms have interacted with the deployed algorithm. (c) Epoch 20 (Strategic Arms): regret as a function of $t$ after the arms have interacted with the deployed algorithm. OptGTM performs similarly across all epochs, whereas LinUCB performs increasingly worse as the arms adapt to the algorithm (Figure 1(a)). Figures 1(b) and 1(c) provide a closer look at the regret of the algorithms across the $T$ rounds in the initial epoch, where the arms are truthful, and the final epoch, after the arms have adapted to the algorithms.

Figure 2: Total context manipulation $\sum_{t,i} \|x^*_{t,i} - x_{t,i}\|_2$.

Figure 3: Utility of the arms for each of the 10 runs.

Experimental Setup.

We associate each arm with a true feature vector $y^*_i \in \mathbb{R}^{d_1}$ (e.g., product features) and randomly sample a sequence of user vectors $c_t \in \mathbb{R}^{d_2}$ (i.e., customer features). We assume that every arm can alter its feature vector $y^*_i$ by reporting some other vector $y_i$, but cannot alter the user contexts $c_t$. We use a feature mapping $\varphi(c_t, y_i) = x_{t,i}$ to map the reported features $y_i \in \mathbb{R}^{d_1}$ and the user features $c_t \in \mathbb{R}^{d_2}$ to an arm-specific context $x_{t,i} \in \mathbb{R}^d$ that the algorithm observes. At the end of every epoch, each arm then performs an approximated gradient step on $y_i$ w.r.t. its utility, i.e., the number of times it is selected. We let $K = 5$ and $d = d_1 = d_2 = 5$ and average the results over 10 runs.

Results.

In Figure 1(a), we observe that OptGTM performs similarly well across all epochs, which suggests that OptGTM successfully discourages the emergence of harmful gaming behavior. In contrast, as the arms adapt their strategies (i.e., what features to report), LinUCB suffers increasingly more regret and performs almost as badly as uniform sampling in the final epoch. In epoch 0, when all arms are truthful, i.e., non-strategic, LinUCB performs better than OptGTM (Figure 1(b)). This is expected, as OptGTM suffers additional regret due to maintaining independent estimates of $\theta^*$ for each arm (as a mechanism to incentivize truthfulness). However, OptGTM significantly outperforms LinUCB as the arms strategically adapt, which is most prominent in the final epoch (Figure 1(c)). Interestingly, as already suggested in Section 5, OptGTM cannot prevent manipulation in the feature space (see Figure 2). However, OptGTM does manage to bound the effect of the manipulation on the regret (Figure 1(a)) and, most importantly, on the utility of the arms as well (Figure 3). As a result, the arms are discouraged from gaming their contexts heavily, and the context manipulation has only a minor effect on the actions taken by OptGTM.

7Discussion

We study a strategic variant of the linear contextual bandit problem, where the arms strategically misreport privately observed contexts to maximize their selection frequency. To address this, we design two online learning mechanisms: the Greedy and the Optimistic Grim Trigger Mechanism, for the scenario where πœƒ βˆ— is known and unknown, respectively. We demonstrate that our mechanisms incentivize the arms to be approximately truthful and, in doing so, effectively minimize regret. More generally, with this work, we aim to advance our understanding of problems at the intersection of online learning and mechanism design. As the digital landscape, including online platforms and marketplaces, becomes increasingly agentic and dominated by self-interested agents, it will be crucial to understand the incentives created by learning algorithms and to align these incentives while optimizing for performance.

Limitations.

One limitation is the otherwise intuitive assumption that the arms do not under-report their value to the learner (Assumption 1 and Assumption 2). Secondly, we believe that the factor of $K^2$ in the universal regret guarantees of Theorem 4.2 and Theorem 5.2 is suboptimal, and we conjecture that the optimal worst-case strategic regret is $\mathcal{O}(d\sqrt{KT})$. We leave this investigation for future work.

Acknowledgements

Thomas Kleine Buening is supported by the UKRI Prosperity Partnership Scheme (Project FAIR). Haifeng Xu is supported in part by the Army Research Office Award W911NF-23-1-0030, the ONR Award N00014-23-1-2802 and the NSF Award CCF-2303372.

References

[1] Yasin Abbasi-Yadkori, DΓ‘vid PΓ‘l, and Csaba SzepesvΓ‘ri. Improved algorithms for linear stochastic bandits. Advances in Neural Information Processing Systems, 24, 2011.
[2] Peter Auer. Using confidence bounds for exploitation-exploration trade-offs. Journal of Machine Learning Research, 3(Nov):397–422, 2002.
[3] Ilija Bogunovic, Arpan Losalka, Andreas Krause, and Jonathan Scarlett. Stochastic linear bandits robust to adversarial attacks. In International Conference on Artificial Intelligence and Statistics, pages 991–999. PMLR, 2021.
[4] Mark Braverman, Jieming Mao, Jon Schneider, and S. Matthew Weinberg. Multi-armed bandit problems with strategic arms. In Conference on Learning Theory, pages 383–416. PMLR, 2019.
[5] Thomas Kleine Buening, Aadirupa Saha, Christos Dimitrakakis, and Haifeng Xu. Bandits meet mechanism design to combat clickbait in online recommendation. In The Twelfth International Conference on Learning Representations, 2023.
[6] Wei Chu, Lihong Li, Lev Reyzin, and Robert Schapire. Contextual bandits with linear payoff functions. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 208–214. JMLR Workshop and Conference Proceedings, 2011.
[7] Sarah Dean, Evan Dong, Meena Jagadeesan, and Liu Leqi. Accounting for AI and users shaping one another: The role of mathematical models. arXiv preprint arXiv:2404.12366, 2024.
[8] Qin Ding, Cho-Jui Hsieh, and James Sharpnack. Robust stochastic linear contextual bandits under adversarial attacks. In International Conference on Artificial Intelligence and Statistics, pages 7111–7123. PMLR, 2022.
[9] Jing Dong, Ke Li, Shuai Li, and Baoxiang Wang. Combinatorial bandits under strategic manipulations. In Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, pages 219–229, 2022.
[10] Jinshuo Dong, Aaron Roth, Zachary Schutzman, Bo Waggoner, and Zhiwei Steven Wu. Strategic classification from revealed preferences. In Proceedings of the 2018 ACM Conference on Economics and Computation, pages 55–70, 2018.
[11] Seyed Esmaeili, MohammadTaghi Hajiaghayi, and Suho Shin. Replication-proof bandit mechanism design. arXiv preprint arXiv:2312.16896, 2023.
[12] Seyed A. Esmaeili, Suho Shin, and Aleksandrs Slivkins. Robust and performance incentivizing algorithms for multi-armed bandits with strategic agents. arXiv preprint arXiv:2312.07929, 2023.
[13] Zhe Feng, David Parkes, and Haifeng Xu. The intrinsic robustness of stochastic bandits to strategic manipulation. In International Conference on Machine Learning, pages 3092–3101. PMLR, 2020.
[14] Rupert Freeman, David Pennock, Chara Podimata, and Jennifer Wortman Vaughan. No-regret and incentive-compatible online learning. In International Conference on Machine Learning, pages 3270–3279. PMLR, 2020.
[15] Nicolas Gast, Stratis Ioannidis, Patrick Loiseau, and Benjamin Roussillon. Linear regression from strategic data sources. ACM Transactions on Economics and Computation (TEAC), 8(2):1–24, 2020.
[16] Arpita Ghosh and Patrick Hummel. Learning and incentives in user-generated content: Multi-armed bandits with endogenous arms. In Proceedings of the 4th Conference on Innovations in Theoretical Computer Science, pages 233–246, 2013.
[17] Moritz Hardt, Nimrod Megiddo, Christos Papadimitriou, and Mary Wootters. Strategic classification. In Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science, pages 111–122, 2016.
[18] Keegan Harris, Dung Daniel T. Ngo, Logan Stapleton, Hoda Heidari, and Steven Wu. Strategic instrumental variable regression: Recovering causal relationships from strategic responses. In International Conference on Machine Learning, pages 8502–8522. PMLR, 2022.
[19] Keegan Harris, Chara Podimata, and Steven Z. Wu. Strategic apple tasting. Advances in Neural Information Processing Systems, 36:79918–79945, 2023.
[20] Jiafan He, Dongruo Zhou, Tong Zhang, and Quanquan Gu. Nearly optimal algorithms for linear contextual bandits with adversarial corruptions. arXiv preprint arXiv:2205.06811, 2022.
[21] Jiri Hron, Karl Krauth, Michael I. Jordan, Niki Kilbertus, and Sarah Dean. Modeling content creator incentives on algorithm-curated platforms. arXiv preprint arXiv:2206.13102, 2022.
[22] Xinyan Hu, Meena Jagadeesan, Michael I. Jordan, and Jacob Steinhardt. Incentivizing high-quality content in online recommender systems. arXiv preprint arXiv:2306.07479, 2023.
[23] Meena Jagadeesan, Nikhil Garg, and Jacob Steinhardt. Supply-side equilibria in recommender systems. Advances in Neural Information Processing Systems, 36, 2024.
[24] Tor Lattimore and Csaba SzepesvΓ‘ri. Bandit Algorithms. Cambridge University Press, 2020.
[25] Lihong Li, Wei Chu, John Langford, and Robert E. Schapire. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th International Conference on World Wide Web, pages 661–670, 2010.
[26] Yang Liu and Yiling Chen. A bandit framework for strategic regression. Advances in Neural Information Processing Systems, 29, 2016.
[27] Yang Liu and Chien-Ju Ho. Incentivizing high quality user contributions: New arm generation in bandit learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
[28] Michael W. Macy and Andreas Flache. Learning dynamics in social dilemmas. Proceedings of the National Academy of Sciences, 99(suppl_3):7229–7236, 2002.
[29] Ross A. Malaga. Worst practices in search engine optimization. Communications of the ACM, 51(12):147–150, 2008.
[30] Yishay Mansour, Aleksandrs Slivkins, and Vasilis Syrgkanis. Bayesian incentive-compatible bandit exploration. In Proceedings of the Sixteenth ACM Conference on Economics and Computation, pages 565–582, 2015.
[31] NicolΓ² Pagan, Joachim Baumann, Ezzat Elokda, Giulia De Pasquale, Saverio Bolognani, and AnikΓ³ HannΓ‘k. A classification of feedback loops and their relation to biases in automated decision-making systems. In Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, pages 1–14, 2023.
[32] Shankar Prawesh and Balaji Padmanabhan. The β€œmost popular news” recommender: Count amplification and manipulation resistance. Information Systems Research, 25(3):569–589, 2014.
[33] Paul Resnick and Rahul Sami. The influence limiter: Provably manipulation-resistant recommender systems. In Proceedings of the 2007 ACM Conference on Recommender Systems, pages 25–32, 2007.
[34] Elan Rosenfeld and Nir Rosenfeld. One-shot strategic classification under unknown costs. arXiv preprint arXiv:2311.02761, 2023.
[35] Suho Shin, Seungjoon Lee, and Jungseul Ok. Multi-armed bandit algorithm against strategic replication. In International Conference on Artificial Intelligence and Statistics, pages 403–431. PMLR, 2022.
[36] Ravi Sundaram, Anil Vullikanti, Haifeng Xu, and Fan Yao. PAC-learning for strategic classification. Journal of Machine Learning Research, 24(192):1–38, 2023.
[37] Ambuj Tewari and Susan A. Murphy. From ads to interventions: Contextual bandits in mobile health. Mobile Health: Sensors, Analytic Methods, and Applications, pages 495–517, 2017.
[38] Zongwei Wang, Min Gao, Junliang Yu, Hao Ma, Hongzhi Yin, and Shazia Sadiq. Poisoning attacks against recommender systems: A survey. arXiv preprint arXiv:2401.01527, 2024.
[39] Chen-Yu Wei, Christoph Dann, and Julian Zimmert. A model selection approach for corruption robust reinforcement learning. In International Conference on Algorithmic Learning Theory, pages 1043–1096. PMLR, 2022.
[40] Fan Yao, Chuanhao Li, Denis Nekipelov, Hongning Wang, and Haifeng Xu. How bad is top-$k$ recommendation under competing content creators? In International Conference on Machine Learning, pages 39674–39701. PMLR, 2023.
[41] Fan Yao, Chuanhao Li, Karthik Abinav Sankararaman, Yiming Liao, Yan Zhu, Qifan Wang, Hongning Wang, and Haifeng Xu. Rethinking incentives in recommender systems: Are monotone rewards always beneficial? Advances in Neural Information Processing Systems, 36, 2024.
[42] Fan Yao, Yiming Liao, Mingzhe Wu, Chuanhao Li, Yan Zhu, James Yang, Qifan Wang, Haifeng Xu, and Hongning Wang. User welfare optimization in recommender systems with competing content creators. arXiv preprint arXiv:2404.18319, 2024.
[43] Mengxin Yu, Zhuoran Yang, and Jianqing Fan. Strategic decision-making in the presence of information asymmetry: Provably efficient RL with algorithmic instruments. arXiv preprint arXiv:2208.11040, 2022.
[44] Hanrui Zhang and Vincent Conitzer. Incentive-aware PAC learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 5797–5804, 2021.
[45] Heyang Zhao, Dongruo Zhou, and Quanquan Gu. Linear contextual bandits with adversarial corruptions. arXiv preprint arXiv:2110.12615, 2021.
Appendix A: Remarks on Incentive-Compatible No-Regret Algorithms

In Section 3, we conjectured that there exists no incentive-compatible no-regret algorithm when the reward observations are subject to noise and $\theta^*$ is unknown. For the interested reader, we here consider the fully deterministic case where $\theta^*$ is known a priori and rewards are directly observable, i.e., subject to no noise, so that $\eta_t \equiv 0$. In this case, we can design the following provably incentive-compatible no-regret algorithm. In fact, we show that this mechanism is strategyproof, i.e., incentive-compatible in weakly dominant strategies.

**Mechanism 4: Incentive-Compatible No-Regret Algorithm in the Fully Deterministic Case**

1. initialize: $A_1 = [K]$
2. for $t < T - (K+1)$ do:
   1. Play $i_t \in \operatorname{argmax}_{i \in A_t} \langle \theta^*, x_{t,i} \rangle$.
   2. Observe reward $r^*_{t,i_t} := \langle \theta^*, x^*_{t,i_t} \rangle$ (i.e., rewards of chosen arms are directly observable).
   3. if $\langle \theta^*, x_{t,i_t} \rangle \neq r^*_{t,i_t}$ then eliminate arm $i_t$: $A_{t+1} \leftarrow A_t \setminus \{i_t\}$.
3. for $t \ge T - (K+1)$ do:
   1. Play $i_t \sim \mathrm{Uniform}(A_t)$.
   2. Observe reward $r^*_{t,i_t} := \langle \theta^*, x^*_{t,i_t} \rangle$.
   3. if $\langle \theta^*, x_{t,i_t} \rangle \neq r^*_{t,i_t}$ then eliminate arm $i_t$: $A_{t+1} \leftarrow A_t \setminus \{i_t\}$.

Lemma A.1.

Mechanism 4 is strategyproof, i.e., being truthful is a weakly dominant strategy for every arm. Moreover, Mechanism 4 suffers at most $K+1$ strategic regret in every Nash equilibrium of the arms.

Proof.

Incentive-Compatibility in Weakly Dominant Strategies. It is easy to see that for the last $K+1$ rounds, reporting truthfully, i.e., reporting $x^*_{t,i}$, is a weakly dominant strategy, since the set of active arms is played uniformly and nothing can be gained from misreporting (an arm can only miss out on being selected by misreporting in the last $K+1$ steps). Hence, conditional on any history, reporting truthfully is the best continuation for any arm. In particular, when an arm plays truthfully in these rounds, the utility it obtains in the last $K+1$ steps is at least $\frac{K+1}{K}$, since $|A_t| \le K$.

Now, for the time steps $t < T - (K+1)$, note that any untruthful strategy can obtain at most one more allocation than the truthful strategy, because if $i_t = i$ and $\langle \theta^*, x_{t,i} \rangle \neq \langle \theta^*, x^*_{t,i} \rangle$, then arm $i$ is eliminated immediately. Hence, at most utility $1$ can be gained from receiving an allocation by misreporting. However, in this case the arm gets eliminated and receives utility $0$ in the last $K+1$ rounds. As seen before, the minimum utility the truthful strategy receives in the last $K+1$ rounds is $\frac{K+1}{K} > 1$. Consequently, irrespective of the other arms' strategies, the truthful strategy is (weakly) optimal for arm $i$.

One may wonder why the truthful strategy is not strictly dominant. To see this, note that reporting any $x_{t,i} \neq x^*_{t,i}$ such that the difference $x_{t,i} - x^*_{t,i}$ is orthogonal to $\theta^*$, i.e., $\langle \theta^*, x_{t,i} - x^*_{t,i} \rangle = 0$, does not cause elimination and is equivalent under Mechanism 4. In other words, such untruthful strategies, which have no effect on the selection, are equally good.

Regret. The regret in the last $K+1$ rounds is trivially bounded by $K+1$. In showing that the algorithm is strategyproof, we showed that any untruthful strategy for which there exists a round $t$ with $i_t = i$ and $\langle \theta^*, x_{t,i} \rangle > \langle \theta^*, x^*_{t,i} \rangle$ is worse than the truthful strategy, independently of what the other arms are playing. Hence, in any Nash equilibrium, arm $i$ chooses strategies such that if $i_t = i$, then $\langle \theta^*, x_{t,i} \rangle = \langle \theta^*, x^*_{t,i} \rangle$. In other words, since the selection is greedy, Mechanism 4 selects the best arm in the given round. Mechanism 4 therefore suffers zero regret in the first $T - (K+1)$ rounds in any Nash equilibrium of the arms. ∎

As discussed in Section 3, we conjecture that there exists no incentive-compatible no-regret algorithm for strategic linear contextual bandits when the reward observations are subject to noise. The intuition for this conjecture is as follows. Suppose there exists a learning algorithm $M$ that is incentive-compatible and no-regret, that is, the strategy profile where every arm is always truthful is a NE. Since $M$ is also no-regret, the selection policy of $M$ must depend on the reported contexts in some way. In particular, in some round $t$ in which $M$ does not select arm $i$ (but $M$ maps reported contexts to an action in $[K]$), there must exist a context $\tilde x_t$ that arm $i$ could report that increases its probability of being selected.

Suppose arm $i$ changes its strategy from $\sigma^*_i$ (i.e., being truthful) to the strategy that is always truthful except for round $t$, where it reports $\tilde x_t$ instead of $x^*_{t,i}$. The algorithm $M$ then observes a reward drawn from a distribution with mean $\langle \theta^*, x^*_{t,i} \rangle$, but might have expected a reward drawn from a distribution with mean $\langle \theta^*, \tilde x_t \rangle$. We believe that the difference in observed and expected reward is statistically insignificant when arm $i$ only misreports a single or constant number of times. However, due to the intricate relationship between the learning algorithm and the induced NE strategies of the $K$ arms, providing a rigorous argument for this is challenging.

Appendix B: Proof of Proposition 3.3

Proof of Proposition 3.3.

We begin with the incentive-unaware greedy algorithm that in round $t$ pulls arm $i_t = \operatorname{argmax}_{i \in [K]} \langle \theta^*, x_{t,i} \rangle$. Let $\tilde x := \operatorname{argmax}_{\|x\| \le 1} \langle \theta^*, x \rangle$, and w.l.o.g. assume that $\tilde x$ is unique. We show that the strategy profile where every arm always reports $\tilde x$ is the only Nash equilibrium under the incentive-unaware greedy algorithm. Let $\boldsymbol{\sigma}$ be any strategy profile such that there exists a round $t$ and arm $i$ with $x_{t,i} \neq \tilde x$. We distinguish between two cases.

Case 1:

There exists a round $t$ and arm $i$ such that $\langle \theta^*, x_{t,i} \rangle < \max_{j \in [K]} \langle \theta^*, x_{t,j} \rangle$.

Note that this implies that arm $i$ is not selected by the learner. However, by reporting $\tilde x$ instead of $x_{t,i}$, arm $i$ is guaranteed to be selected with probability at least $1/K$. Hence, reporting $x_{t,i}$ is strictly worse than reporting $\tilde x$, so that $\boldsymbol{\sigma}$ cannot be a NE.

Case 2:

⟨ πœƒ βˆ— , π‘₯ 𝑑 , 𝑖 ⟩

max 𝑗 ∈ [ 𝐾 ] ⁑ ⟨ πœƒ βˆ— , π‘₯ 𝑑 , 𝑗 ⟩ for all rounds 𝑑 and arms 𝑖 .

Note that this implies that each arm 𝑖 is selected with probability 1 / 𝐾 every round.6 Suppose that for any of these rounds 𝑑 we have max 𝑗 ∈ [ 𝐾 ] ⁑ ⟨ πœƒ βˆ— , π‘₯ 𝑑 , 𝑗 ⟩ < ⟨ πœƒ βˆ— , π‘₯ ~ ⟩ . Then, by reporting π‘₯ ~ instead of π‘₯ 𝑑 , 𝑖 arm 𝑖 could ensure to be selected with probability one. Hence, the strategy where arm 𝑖 in round 𝑑 reports π‘₯ ~ instead of π‘₯ 𝑑 , 𝑖 is a strictly better response. Therefore, 𝝈 cannot be a NE. The other case is when max 𝑗 ∈ [ 𝐾 ] ⁑ ⟨ πœƒ βˆ— , π‘₯ 𝑑 , 𝑗 ⟩

⟨ πœƒ βˆ— , π‘₯ ~ ⟩ , but this cannot be because 𝝈 is supposed to be different to always reporting π‘₯ ~ .

Consequently, the strategy profile where every arm always reports π‘₯ ~ is the only NE under the incentive-unaware greedy algorithm. Under this strategy profile, the incentive-unaware greedy algorithm will play uniformly and therefore suffer linear regret.

Failures of Algorithms for Non-Strategic Linear Contextual Bandits.

It is not surprising that algorithms for non-strategic linear contextual bandits fail in the strategic setting, since such algorithms implicitly incentivize the arms to β€œcompete” in every round by misreporting their context as the best possible one. Nothing prevents the arms from myopically optimizing their probability of being selected in every round. As an example of a standard algorithm for non-strategic linear contextual bandits, we consider LinUCB, which in the non-strategic problem setup enjoys a regret guarantee of $\tilde{\mathcal{O}}(d\sqrt{T})$.

The reasons for LinUCB's failure in this strategic problem are the same as for the incentive-unaware greedy algorithm from before. It will be the strictly dominant strategy for the arms to maximize their selection probability in the given round by misreporting their context. Recall that LinUCB maintains a least-squares estimator given by

$$\hat\theta_t = \operatorname{argmin}_{\theta \in \mathbb{R}^d} \sum_{\ell=1}^{t-1} \big( \langle \theta, x_{\ell, i_\ell} \rangle - r_{\ell, i_\ell} \big)^2 + \lambda \|\theta\|_2^2,$$

and in round $t$ selects the arm (ties broken uniformly at random)

$$i_t = \operatorname{argmax}_{i \in [K]} \langle \hat\theta_t, x_{t,i} \rangle + \beta_t \|x_{t,i}\|_{V_t^{-1}},$$

where $\beta_t \approx \sqrt{d \log(T)}$ and $V_t = \lambda I + \sum_{\ell=1}^{t-1} x_{\ell, i_\ell} x_{\ell, i_\ell}^\top$. Let $\operatorname{UCB}_t(x) := \langle \hat\theta_t, x \rangle + \beta_t \|x\|_{V_t^{-1}}$.

The argument for LinUCB is the same as for the incentive-unaware greedy algorithm. Let $\tilde x_t := \operatorname{argmax}_{\|x\|_2 \le 1} \operatorname{UCB}_t(x)$ and w.l.o.g. assume that $\tilde x_t$ is unique.

Importantly, in what follows, keep in mind that it does not matter how the reports of arm $i$ influenced $\hat\theta_t$ or $V_t$ in previous rounds. Once again, suppose $\boldsymbol{\sigma}$ is a strategy profile such that there exists a round $t$ and arm $i$ for which, conditioned on the history, $x_{t,i} \neq \tilde x_t$. We again distinguish between the following two cases:

Case 1:

There exists a round $t$ and arm $i$ such that $\operatorname{UCB}_t(x_{t,i}) < \max_{j \in [K]} \operatorname{UCB}_t(x_{t,j})$.

This implies that arm $i$ was not selected by the learner in round $t$. However, by reporting $\tilde x_t$ in round $t$ and $\tilde x_\ell$ in all future rounds $\ell > t$, arm $i$ can guarantee being selected with probability at least $1/K$ in round $t$ and at least as many selections as under $\sigma_i$. Hence, $\sigma_i$ cannot be a best response to $\sigma_{-i}$.

Case 2:

UCB 𝑑 ⁒ ( π‘₯ 𝑑 , 𝑖 )

max 𝑗 ∈ [ 𝐾 ] ⁑ UCB 𝑑 ⁒ ( π‘₯ 𝑑 , 𝑗 ) for all rounds 𝑑 and arms 𝑖 .

Note that this implies that arm 𝑖 is selected with probability 1 / 𝐾 every round. Suppose that for any round 𝑑 it is the case that max 𝑗 ∈ [ 𝐾 ] ⁑ UCB 𝑑 ⁒ ( π‘₯ 𝑑 , 𝑗 ) < UCB 𝑑 ⁒ ( π‘₯ ~ ) . Then, by choosing strategy π‘₯ ~ 𝑑 in round 𝑑 and π‘₯ ~ β„“ adaptively for all future rounds β„“

𝑑 , arm 𝑖 obtains more selections than when reporting π‘₯ 𝑑 , 𝑖 . Hence, 𝝈 cannot be a NE.

As a result, the strategy profile in which every arm reports $\tilde{x}_t$ in round $t$ is the only NE, and LinUCB suffers linear regret, as it then pulls arms uniformly at random. In exactly the same way, we can show that the algorithms for linear contextual bandits with adversarial context corruptions in [8] suffer linear regret. ∎
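To make this failure mode concrete, the following is a minimal simulation sketch (not from the paper; the instance, parameters, and the coarse random search over the unit ball are illustrative choices of ours): when every arm best-responds by reporting the UCB-maximizing context $\tilde{x}_t$, all reports coincide, LinUCB tie-breaks uniformly, and each arm collects roughly $T/K$ selections regardless of the true rewards.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, T, lam = 3, 4, 1000, 1.0
V = lam * np.eye(d)                      # V_t = lam*I + sum of reported outer products
b = np.zeros(d)                          # sum of x_l * r_l over selected reports
mu_star = rng.uniform(0.0, 1.0, size=K)  # hidden true mean rewards (illustrative)
pulls = np.zeros(K, dtype=int)

for t in range(T):
    V_inv = np.linalg.inv(V)
    theta_hat = V_inv @ b
    beta = np.sqrt(d * np.log(T))
    # Every arm best-responds by reporting the context that maximizes the
    # UCB index over the unit ball (approximated by coarse random search).
    cand = rng.normal(size=(128, d))
    cand /= np.linalg.norm(cand, axis=1, keepdims=True)
    idx = cand @ theta_hat + beta * np.sqrt(np.einsum('ij,jk,ik->i', cand, V_inv, cand))
    x_tilde = cand[np.argmax(idx)]
    # All K reports coincide, so LinUCB must tie-break uniformly at random.
    i_t = rng.integers(K)
    pulls[i_t] += 1
    r = mu_star[i_t] + 0.1 * rng.normal()  # reward comes from the *true* environment
    V += np.outer(x_tilde, x_tilde)
    b += x_tilde * r

print(pulls)  # roughly T/K selections per arm, i.e., linear regret w.r.t. the best arm
```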

Appendix C Assumption 1 and Remark 4.3

Example 1.

We here give a simple example where a strategic arm can simulate a situation where it is always optimal even though it is only optimal half of the time.

Let πœƒ βˆ—

1 and consider the following problem instance with two arms 1 and 2 , where

π‘₯ 𝑑 , 1 βˆ—

{ 0 ,   𝑑  is even

1 ,   𝑑  is odd
 and  π‘₯ 𝑑 , 2 βˆ—

1 / 4 .

Now, suppose that arm 1 always reports $x_{t,1} = 1/2$ and arm 2 reports truthfully (or approximately so). Then, arm 1 appears optimal in every round $t$. In particular, on average, when we pull arm 1 it yields reward $1/2$, which is consistent with its report of $x_{t,1} = 1/2$.

Now, consider a second problem instance, where

$$x_{t,1}^* = 1/2 \qquad \text{and} \qquad x_{t,2}^* = 1/4.$$

Recall that, as is standard in the literature, we assume the noise to be sub-Gaussian. As an example, consider Bernoulli noise such that $\mathbb{P}(r_{t,i} = 1) = x_{t,i}^* = 1 - \mathbb{P}(r_{t,i} = 0)$. Then, the first environment in which arm 1 manipulates as suggested is identical to the second environment in which the arms are truthful. In the second environment, to suffer sublinear regret we must select arm 1 order $T$ many times. However, in the first environment, we must select arm 1 only $o(T)$ many times.
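The indistinguishability claim can be illustrated with a short simulation (our own sketch; the pull schedule and seed are illustrative): under Bernoulli noise, the rewards observed from arm 1 in the manipulated first environment and in the truthful second environment both average $1/2$, matching the constant report $x_{t,1} = 1/2$.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 100_000
pulled = rng.random(T) < 0.3   # rounds in which the learner happens to pull arm 1

# Environment 1: arm 1's true context alternates 0, 1, 0, 1, ... (theta* = 1),
# but the arm always reports x_{t,1} = 1/2.  Rewards are Bernoulli(x*_{t,1}).
true_ctx = (np.arange(T) % 2).astype(float)
rew_env1 = (rng.random(T) < true_ctx)[pulled]

# Environment 2: arm 1's true context genuinely equals 1/2 in every round.
rew_env2 = (rng.random(T) < 0.5)[pulled]

# Both empirical means are ~1/2 and consistent with the constant report 1/2:
# a learner that only compares average observed reward against the reports
# cannot detect the manipulation.
print(round(rew_env1.mean(), 2), round(rew_env2.mean(), 2))
```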

Discussion.

Based on these observations, we expect that we would have to make additional (strong) assumptions about the distribution of the noise and the prior knowledge of the learner in order to drop Assumption 1. As an example, let’s assume standard normal noise 𝒩 ⁒ ( 0 , 1 ) and that the learner knows that the variance is always 1 a priori. Then, one potentially effective approach would be to extend our current grim trigger to additionally threaten arms that misreport their variance. Of course, the variance is unknown, however, we could estimate the variance of each arm’s reports separately and use confidence intervals around the estimated variance. We could then threaten an arm with elimination if the arm’s estimated variance falls out of the confidence interval. However, we expect there to be several technical subtleties in analyzing such variance-aware mechanisms.

Appendix D Proof of Theorem 4.1 and Theorem 4.2

D.1 Preliminaries

We begin with some preliminaries that we use in the proofs of both Theorem 4.1 and Theorem 4.2. Note that $\mathbb{E}[r_{t,i}] = \langle \theta^*, x_{t,i}^* \rangle$ and we denote the total true mean reward by

$$\hat{r}_{t,i}^* := \sum_{\ell \le t : i_\ell = i} \langle \theta^*, x_{\ell,i}^* \rangle.$$

Let us also recall the definition of the total observed reward $\hat{r}_{t,i} := \sum_{\ell \le t : i_\ell = i} r_{\ell,i}$, its upper confidence bound $\mathrm{UCB}_t(\hat{r}_{t,i}) := \hat{r}_{t,i} + 2\sqrt{n_t(i)\log(T)}$, and define the lower confidence bound $\mathrm{LCB}_t(\hat{r}_{t,i}) := \hat{r}_{t,i} - 2\sqrt{n_t(i)\log(T)}$.

We now analyze the basic properties of the grim trigger condition of GGTM, which eliminates arm $i$ in round $t$ if

$$\sum_{\ell \le t : i_\ell = i} \langle \theta^*, x_{\ell,i} \rangle > \mathrm{UCB}_t(\hat{r}_{t,i}),$$

or equivalently

$$\sum_{\ell \le t : i_\ell = i} \big(\langle \theta^*, x_{\ell,i} \rangle - r_{\ell,i}\big) > 2\sqrt{n_t(i)\log(T)}.$$

Define the good event $\mathcal{G}$ as

$$\mathcal{G} := \Big\{ \mathrm{LCB}_t(\hat{r}_{t,i}) \le \hat{r}_{t,i}^* \le \mathrm{UCB}_t(\hat{r}_{t,i}) \;\; \forall t \in [T], i \in [K] \Big\}.$$

By Hoeffding’s inequality, the good event occurs with probability at least $\mathbb{P}(\mathcal{G}) \ge 1 - \frac{1}{T^2}$. Next, let

$$\tau_i := \min\{ t \in [T] : i \notin A_t \}$$

denote the first round in which $i$ is no longer active and, by convention, let $\tau_i = T$ if $i \in A_T$. By design of the grim trigger condition, note that $\tau_i = T$ for all $i \in [K]$ on the good event $\mathcal{G}$ if all arms always report truthfully.

We now provide a general result bounding the maximal amount of manipulation any arm can exercise before being eliminated by GGTM.

Lemma D.1.

On the good event $\mathcal{G}$, for any round $t \in [T]$ and any arm $i \in A_t$ it holds that

$$\sum_{\ell \le t : i_\ell = i} \langle \theta^*, x_{\ell,i} - x_{\ell,i}^* \rangle \le 4\sqrt{n_t(i)\log(T)}.$$

From the definition of $\tau_i$ this entails that

$$\sum_{\ell \le \tau_i : i_\ell = i} \langle \theta^*, x_{\ell,i} - x_{\ell,i}^* \rangle \le 4\sqrt{n_{\tau_i}(i)\log(T)}.$$

Proof.

On the good event $\mathcal{G}$, it holds that

$$\sum_{\ell \le t : i_\ell = i} \big(\langle \theta^*, x_{\ell,i}^* \rangle - r_{\ell,i}\big) \in \Big[ -2\sqrt{n_t(i)\log(T)}, \; +2\sqrt{n_t(i)\log(T)} \Big],$$

which implies that

$$\sum_{\ell \le t : i_\ell = i} \big(\langle \theta^*, x_{\ell,i} \rangle - r_{\ell,i}\big) \ge \sum_{\ell \le t : i_\ell = i} \langle \theta^*, x_{\ell,i} - x_{\ell,i}^* \rangle - 2\sqrt{n_t(i)\log(T)}.$$

Hence, if $\sum_{\ell \le t : i_\ell = i} \langle \theta^*, x_{\ell,i} - x_{\ell,i}^* \rangle > 4\sqrt{n_t(i)\log(T)}$, then

$$\sum_{\ell \le t : i_\ell = i} \big(\langle \theta^*, x_{\ell,i} \rangle - r_{\ell,i}\big) > 2\sqrt{n_t(i)\log(T)},$$

which means that arm $i$ is eliminated from $A_t$. Finally, $\tau_i$ is defined as the first round such that $i \notin A_t$, so that

$$\sum_{\ell \le t : i_\ell = i} \langle \theta^*, x_{\ell,i} - x_{\ell,i}^* \rangle \le 4\sqrt{n_t(i)\log(T)}$$

for all $t \le \tau_i$, while possibly $\sum_{\ell \le \tau_i + 1 : i_\ell = i} \langle \theta^*, x_{\ell,i} - x_{\ell,i}^* \rangle > 4\sqrt{n_{\tau_i}(i)\log(T)}$. ∎

For completeness, we also formally state the fact that on the good event $\mathcal{G}$ a truthful arm is never eliminated.

Lemma D.2.

If arm $i$ reports truthfully in every round, i.e., plays the strategy $\sigma_i^*$ with $x_{t,i} = x_{t,i}^*$ for all rounds $t \in [T]$, then on the good event $\mathcal{G}$ arm $i$ stays active in all rounds.

Proof.

When arm $i$ is truthful, then $\sum_{\ell \le t : i_\ell = i} \langle \theta^*, x_{\ell,i} \rangle = \hat{r}_{t,i}^*$. On the good event, $\hat{r}_{t,i}^* \le \mathrm{UCB}_t(\hat{r}_{t,i})$ for all $t \in [T]$. Hence, the grim trigger condition is never satisfied and arm $i$ remains active throughout all $T$ rounds. ∎

D.2 Proof of Theorem 4.1

Proof of Theorem 4.1.

We have to show that the strategy profile $\boldsymbol{\sigma}^*$, in which every arm always truthfully reports its context, i.e., $x_{t,i} = x_{t,i}^*$ for all $(t,i) \in [T] \times [K]$, forms a $\tilde{\mathcal{O}}(\sqrt{T})$-Nash equilibrium for the arms under GGTM. We do this by showing that no deviating strategy $\sigma_i$ for arm $i$ can gain more than $\tilde{\mathcal{O}}(\sqrt{T})$ additional selections. Recall that $i_t^*$ is the optimal arm in round $t$ and $i_t$ the arm the learner selects.

We begin by deriving the minimum utility of every arm when everyone is truthful. To this end, let $n_T^*(i) := \sum_{t=1}^T \mathbb{1}\{ i_t^* = i \}$ be the number of times arm $i$ is the optimal arm. If every arm is truthful, then on the good event $\mathcal{G}$ no arm gets eliminated (Lemma D.2) and $\langle \theta^*, x_{t,i} \rangle = \langle \theta^*, x_{t,i}^* \rangle$ for all $(t,i) \in [T] \times [K]$. As a result, GGTM pulls the optimal arm $i_t^*$ in every round $t$. First, note that

$$\mathbb{E}_{\boldsymbol{\sigma}^*}[n_T(i)] \ge n_T^*(i) - \frac{1}{T},$$

because on the good event $\mathcal{G}$ (when everyone is truthful), we have $n_T(i) \ge n_T^*(i)$. Since by construction $\mathbb{P}(\mathcal{G}) \ge 1 - 1/T^2$ and $n_T^*(i) \le T$, the lower bound follows.

Next, we bound the utility of a deviating strategy $\sigma_i$ in response to GGTM and the other arms’ truthful strategies $\sigma_{-i}^*$. On the good event $\mathcal{G}$, when the arms play strategies $(\sigma_i, \sigma_{-i}^*)$, we have

$$n_T(i) = \sum_{t=1}^T \mathbb{1}\{ i_t = i, i_t^* = i \} + \sum_{t=1}^T \mathbb{1}\{ i_t = i, i_t^* \ne i \} \le \sum_{t=1}^T \mathbb{1}\{ i_t^* = i \} + \sum_{t=1}^T \mathbb{1}\{ i_t = i, i_t^* \ne i \} = n_T^*(i) + \sum_{t=1}^T \mathbb{1}\{ i_t = i, i_t^* \ne i \}.$$

We will now bound the sum on the right hand side from above.

Every arm $j \ne i$ is truthful and therefore, on the good event, $j \in A_t$ for all $t$. If the optimal arm is not $i$, i.e., $i_t^* \ne i$, it means that $\langle \theta^*, x_{t,i_t^*}^* - x_{t,i}^* \rangle > 0$. Next, since GGTM selects the arms greedily according to the reported reward, the event $i_t = i$ implies that

$$\langle \theta^*, x_{t,i} \rangle \ge \langle \theta^*, x_{t,i_t^*}^* \rangle,$$

where we used that any arm $i_t^* \ne i$ is truthful so that $\langle \theta^*, x_{t,i_t^*} \rangle = \langle \theta^*, x_{t,i_t^*}^* \rangle$. As a result, we can apply Lemma D.1 to obtain

$$\sum_{t=1}^T \mathbb{1}\{ i_t^* \ne i, i_t = i \} \langle \theta^*, x_{t,i_t^*}^* - x_{t,i}^* \rangle \le \sum_{t=1}^T \mathbb{1}\{ i_t = i, i_t^* \ne i \} \langle \theta^*, x_{t,i} - x_{t,i}^* \rangle \le \sum_{t=1}^{\tau_i} \mathbb{1}\{ i_t = i \} \langle \theta^*, x_{t,i} - x_{t,i}^* \rangle \le 4\sqrt{n_{\tau_i}(i)\log(T)}.$$

Since for $i_t^* \ne i$ the gap $\langle \theta^*, x_{t,i_t^*}^* - x_{t,i}^* \rangle$ is positive and assumed to be bounded below by a constant, we get that $\sum_{t=1}^T \mathbb{1}\{ i_t^* \ne i, i_t = i \} \le \mathcal{O}\big(\sqrt{n_{\tau_i}(i)\log(T)}\big)$. We coarsely upper bound $n_{\tau_i}(i)$ by $T$ and, using that the good event $\mathcal{G}$ has probability at least $1 - 1/T^2$, we obtain

$$\mathbb{E}_{\sigma_i, \sigma_{-i}^*}[n_T(i)] \le n_T^*(i) + \mathcal{O}\big(\sqrt{T \log(T)}\big).$$

We have thus shown that

$$\mathbb{E}_{\boldsymbol{\sigma}^*}[n_T(i)] \ge \mathbb{E}_{\sigma_i, \sigma_{-i}^*}[n_T(i)] - \mathcal{O}\big(\sqrt{T \log(T)}\big)$$

for any deviating (dishonest) strategy $\sigma_i$. This means that $\boldsymbol{\sigma}^*$ is a $\tilde{\mathcal{O}}(\sqrt{T})$-Nash equilibrium for the arms.

Finally, the regret of GGTM when the arms are truthful is readily bounded by $1/T$ using the fact that on the good event no arm gets eliminated and, therefore, GGTM picks the round-optimal arm in every round. The event that $\mathcal{G}$ does not hold has probability at most $1/T^2$, which implies expected regret at most $1/T$, i.e., $R_T(\mathrm{GGTM}, \boldsymbol{\sigma}^*) \le 1/T$.

∎

D.3 Proof of Theorem 4.2

Proof of Theorem 4.2.

The proof of Theorem 4.2 is notably more involved than that of Theorem 4.1, even though the general proof idea remains similar.

We begin by decomposing the regret of GGTM into the rounds in which the optimal arm is active and the rounds in which it has already been eliminated. To this end, recall the definition of the arm that is optimal in round $t$: $i_t^* := \operatorname*{argmax}_{i \in [K]} \langle \theta^*, x_{t,i}^* \rangle$. We have

$$R_T = \underbrace{\mathbb{E}\Big[ \sum_{t=1}^T \mathbb{1}\{ i_t^* \in A_t \} \langle \theta^*, x_{t,i_t^*}^* - x_{t,i_t}^* \rangle \Big]}_{I_1} + \underbrace{\mathbb{E}\Big[ \sum_{t=1}^T \mathbb{1}\{ i_t^* \notin A_t \} \langle \theta^*, x_{t,i_t^*}^* - x_{t,i_t}^* \rangle \Big]}_{I_2}.$$

We now bound $I_1$ and $I_2$ separately as follows.

Lemma D.3 (Bounding $I_1$).

$$\mathbb{E}\Big[ \sum_{t=1}^T \mathbb{1}\{ i_t^* \in A_t \} \langle \theta^*, x_{t,i_t^*}^* - x_{t,i_t}^* \rangle \Big] \le \mathcal{O}\big(\sqrt{KT\log(T)}\big).$$

Proof.

Let $i_t$ denote the selection of GGTM in round $t$. Recall that GGTM greedily selects the arm in $A_t$ with the highest reported value in round $t$, that is, $\langle \theta^*, x_{t,i_t} \rangle = \max_{i \in A_t} \langle \theta^*, x_{t,i} \rangle$. Consequently, on the event $\{ i_t^* \in A_t \}$, we have

$$\langle \theta^*, x_{t,i_t^*}^* \rangle \le \langle \theta^*, x_{t,i_t^*} \rangle \le \max_{i \in A_t} \langle \theta^*, x_{t,i} \rangle = \langle \theta^*, x_{t,i_t} \rangle,$$

where the first inequality holds by the assumption that the optimal arm $i_t^*$ reports a value at least as high as its true value. As a consequence, it holds that $\langle \theta^*, x_{t,i_t^*}^* - x_{t,i_t}^* \rangle \le \langle \theta^*, x_{t,i_t} - x_{t,i_t}^* \rangle$, which implies:

$$\mathbb{E}\Big[ \sum_{t=1}^T \mathbb{1}\{ i_t^* \in A_t \} \langle \theta^*, x_{t,i_t^*}^* - x_{t,i_t}^* \rangle \Big] \le \mathbb{E}\Big[ \sum_{t=1}^T \mathbb{1}\{ i_t^* \in A_t \} \langle \theta^*, x_{t,i_t} - x_{t,i_t}^* \rangle \Big]. \tag{5}$$

When $i_t^* \in A_t$, a necessary condition for arm $i$ to be selected in round $t$ (i.e., $i_t = i$) is that $t \le \tau_i$. Finally, we split the sum into each arm’s contribution and apply Lemma D.1 to obtain

$$(5) \le \mathbb{E}\Big[ \sum_{i=1}^K \sum_{t=1}^{\tau_i} \mathbb{1}\{ i_t = i \} \langle \theta^*, x_{t,i_t} - x_{t,i_t}^* \rangle \Big] \le \mathbb{E}\Big[ \sum_{i=1}^K 4\sqrt{n_{\tau_i}(i)\log(T)} \Big] \le 4\sqrt{KT\log(T)},$$

where the last step follows from Jensen’s inequality by bounding $n_{\tau_i}(i)$ by $n_T(i)$ and using that $\sum_{i=1}^K n_T(i) \le T$ by definition of $n_T(i)$. ∎

While bounding $I_1$ is fairly straightforward and did not rely on the fact that the arms respond in Nash equilibrium, bounding $I_2$ is more challenging, as we must argue that it is in each arm’s interest to remain active for a sufficiently long time.

Lemma D.4 (Bounding $I_2$).

$$\mathbb{E}\Big[ \sum_{t=1}^T \mathbb{1}\{ i_t^* \notin A_t \} \langle \theta^*, x_{t,i_t^*}^* - x_{t,i_t}^* \rangle \Big] \le 5K^2\sqrt{KT\log(T)}. \tag{6}$$

Proof.

To bound $I_2$ we argue via the best response property of the Nash equilibrium. This requires some intermediate steps. We begin with a lower bound on the expected number of selections any arm must receive when the arms act according to a Nash equilibrium under GGTM.

Recall the definition $n_\tau^*(i) := \sum_{t=1}^\tau \mathbb{1}\{ i_t^* = i \}$ and that the indicator variables $\mathbb{1}\{ i_t^* = i \}$ are not random, since we work under an adversarially chosen sequence of true contexts. In contrast, the indicator $\mathbb{1}\{ i_t = i \}$ is a random variable, as it generally depends on the random reward observations and any randomization of the algorithm.

The following lemma provides a lower bound on the number of allocations any arm must receive in equilibrium. To prove the lemma, we show that we are able to protect any truthful arm from losing more than order $\sqrt{KT}$ allocations to manipulating arms. This is crucial, as it would be impossible to incentivize approximately truthful behavior if an arm could lose too many allocations, e.g., order $T$ many, by being truthful.

A key challenge here is that under two different strategies $\sigma_i$ and $\sigma_i'$, the set of active arms can be quite different. This is the case since, even though we estimate each arm’s expected reward independently, arm $i$ can still slightly influence the elimination of some other arm $j$ by poaching selections from them. As a result, we must content ourselves with a more conservative bound than one may originally expect.

Lemma D.5.

Let $\boldsymbol{\sigma} \in \mathrm{NE}(\mathrm{GGTM})$. Then,

$$\mathbb{E}_{\boldsymbol{\sigma}}[n_T(i)] \ge n_T^*(i) - \mathcal{O}\big(\sqrt{KT\log(T)}\big).$$

In particular, it holds that $\mathbb{E}_{\boldsymbol{\sigma}}[n_t(i)] \ge n_t^*(i) - \mathcal{O}\big(\sqrt{KT\log(T)}\big)$ for any $t \in [T]$.

Proof.

We use the fact that if $\boldsymbol{\sigma} = (\sigma_1, \dots, \sigma_K)$ is a NE under GGTM, then $\sigma_i$ must be a best response to $\sigma_{-i}$, i.e., $\mathbb{E}_{\sigma_i, \sigma_{-i}}[n_T(i)] \ge \mathbb{E}_{\sigma_i', \sigma_{-i}}[n_T(i)]$ for all strategies $\sigma_i'$. In particular, it must hold for the truthful strategy $\sigma_i^*$ that

$$\mathbb{E}_{\sigma_i, \sigma_{-i}}[n_T(i)] \ge \mathbb{E}_{\sigma_i^*, \sigma_{-i}}[n_T(i)].$$

We focus on the good event $\mathcal{G}$, so that $i \in A_t$ for all $t$ given that arm $i$ is truthful. We are interested in the number of rounds such that $i_t^* = i$ and $i_t \ne i$. Given strategies $(\sigma_i^*, \sigma_{-i})$, so that $i \in A_T$ on the good event, we have

$$\sum_{t=1}^T \mathbb{1}\{ i_t^* = i, i_t \ne i \} = \sum_{j \ne i} \sum_{t=1}^{\tau_j} \mathbb{1}\{ i_t^* = i, i_t = j \}.$$

Note that $i_t = j$ with $i_t^* = i \in A_t$ implies that $\langle \theta^*, x_{t,j} \rangle \ge \langle \theta^*, x_{t,i} \rangle$. Moreover, because $i$ is truthful and $i_t^* = i$, we have $\langle \theta^*, x_{t,i} \rangle = \langle \theta^*, x_{t,i_t^*}^* \rangle$, so that $\langle \theta^*, x_{t,j} \rangle \ge \langle \theta^*, x_{t,i_t^*}^* \rangle$. As a result,

$$\langle \theta^*, x_{t,i_t^*}^* - x_{t,j}^* \rangle \le \langle \theta^*, x_{t,j} - x_{t,j}^* \rangle.$$

It then follows from Lemma D.1 that

$$\sum_{t=1}^{\tau_j} \mathbb{1}\{ i_t^* = i, i_t = j \} \langle \theta^*, x_{t,i_t^*}^* - x_{t,j}^* \rangle \le \sum_{t=1}^{\tau_j} \mathbb{1}\{ i_t^* = i, i_t = j \} \langle \theta^*, x_{t,j} - x_{t,j}^* \rangle \tag{7}$$

$$\le \sum_{t=1}^{\tau_j} \mathbb{1}\{ i_t = j \} \langle \theta^*, x_{t,j} - x_{t,j}^* \rangle \tag{8}$$

$$\le 4\sqrt{n_{\tau_j}(j)\log(T)}. \tag{9}$$

Since ⟨ πœƒ βˆ— , π‘₯ 𝑑 , 𝑖 𝑑 βˆ— βˆ— βˆ’ π‘₯ 𝑑 , 𝑗 βˆ— ⟩ is constant for 𝑖 𝑑 βˆ—

𝑖 , 𝑖 𝑑

𝑗 , we obtain

βˆ‘ 𝑗 β‰  𝑖 βˆ‘ 𝑑

1 𝑇 πŸ™ ⁒ { 𝑖 𝑑 βˆ—

𝑖 , 𝑖 𝑑 β‰  𝑖 } ≀ βˆ‘ 𝑗 β‰  𝑖 π’ͺ ⁒ ( 𝑛 𝜏 𝑗 ⁒ ( 𝑗 ) ⁒ log ⁑ ( 𝑇 ) ) ≀ π’ͺ ⁒ ( 𝐾 ⁒ 𝑇 ⁒ log ⁑ ( 𝑇 ) ) ,

where the last inequality follows from Jensen’s inequality. Recalling that $\mathbb{P}(\mathcal{G}) \ge 1 - 1/T^2$, this provides us with the following lower bound on the utility of the truthful strategy:

$$\mathbb{E}_{\sigma_i^*, \sigma_{-i}}[n_T(i)] = \mathbb{E}_{\sigma_i^*, \sigma_{-i}}\Big[ \sum_{t=1}^T \mathbb{1}\{ i_t = i \} \Big] \ge \mathbb{E}_{\sigma_i^*, \sigma_{-i}}\Big[ \sum_{t=1}^T \mathbb{1}\{ i_t^* = i \} - \sum_{t=1}^T \mathbb{1}\{ i_t^* = i, i_t \ne i \} \Big] \ge n_T^*(i) - \mathcal{O}\big(\sqrt{KT\log(T)}\big).$$

Note that, as before, we account for the event $\mathcal{G}^c$ by increasing the constant factor by one, since $(1 - 1/T^2)\, n_T(i) \ge n_T(i) - 1/T$ as $n_T(i) \le T$.

Since $\sigma_i$ has to be a best response to $\sigma_{-i}$, it must be at least as good as $\sigma_i^*$, so that

$$\mathbb{E}_{\boldsymbol{\sigma}}[n_T(i)] \ge \mathbb{E}_{\sigma_i^*, \sigma_{-i}}[n_T(i)] \ge n_T^*(i) - \mathcal{O}\big(\sqrt{KT\log(T)}\big).$$

To get the result for any $t \in [T]$, suppose for contradiction that on the good event $\mathcal{G}$ it holds that $n_t(i) < n_t^*(i) - \omega\big(\sqrt{KT\log(T)}\big)$. Now, recall from equation (7) that the number of rounds such that $i_t^* = i$ and $i_t \ne i$ is bounded by $\mathcal{O}\big(\sqrt{Kt\log(T)}\big)$ on the event $\mathcal{G}$. Hence, since we assumed that $n_t(i) < n_t^*(i) - \omega\big(\sqrt{KT\log(T)}\big)$ and $\langle \theta^*, x_{t,i} - x_{t,i}^* \rangle \ge 0$, it must hold that $\tau_i < t$. Consequently, on the good event $\mathcal{G}$, we obtain

$$n_{\tau_i}(i) \le n_t(i) < n_t^*(i) - \omega\big(\sqrt{KT\log(T)}\big).$$

This implies that

$$\mathbb{E}_{\boldsymbol{\sigma}}[n_{\tau_i}(i)] < (1 - 1/T^2)\Big( n_t^*(i) - \omega\big(\sqrt{KT\log(T)}\big) \Big) + 1/T \le n_t^*(i) - \omega\big(\sqrt{KT\log(T)}\big).$$

This stands in contradiction to the earlier lower bound $\mathbb{E}_{\boldsymbol{\sigma}}[n_T(i)] = \mathbb{E}_{\boldsymbol{\sigma}}[n_{\tau_i}(i)] \ge n_T^*(i) - \mathcal{O}\big(\sqrt{KT\log(T)}\big)$.

∎

Next, we provide an upper bound on the number of times an arm is pulled in any Nash equilibrium. In other words, we bound the profit any arm can make under GGTM from misreporting contexts.

Lemma D.6.

Let $\boldsymbol{\sigma}$ be any NE under GGTM. Then,

$$\mathbb{E}_{\boldsymbol{\sigma}}[n_T(i)] \le \mathbb{E}_{\boldsymbol{\sigma}}[n_{\tau_i}^*(i)] + \mathcal{O}\big((K-1)\sqrt{KT\log(T)}\big).$$

Proof.

Note that $\sum_{i=1}^K n_\tau^*(i) = \sum_{i=1}^K \sum_{t=1}^\tau \mathbb{1}\{ i_t^* = i \} = \tau$ for any $\tau \in [T]$. Using Lemma D.5, we then obtain

$$\mathbb{E}_{\boldsymbol{\sigma}}[n_T(i)] = \mathbb{E}_{\boldsymbol{\sigma}}[n_{\tau_i}(i)] = \mathbb{E}_{\boldsymbol{\sigma}}\Big[ \sum_{t=1}^{\tau_i} \mathbb{1}\{ i_t = i \} \Big] = \mathbb{E}_{\boldsymbol{\sigma}}\Big[ \sum_{t=1}^{\tau_i} \big( 1 - \mathbb{1}\{ i_t \ne i \} \big) \Big] = \mathbb{E}_{\boldsymbol{\sigma}}[\tau_i] - \sum_{j \ne i} \mathbb{E}_{\boldsymbol{\sigma}}[n_{\tau_i}(j)]$$

$$\le \mathbb{E}_{\boldsymbol{\sigma}}[\tau_i] - \sum_{j \ne i} \mathbb{E}_{\boldsymbol{\sigma}}[n_{\tau_i}^*(j)] + \mathcal{O}\big((K-1)\sqrt{KT\log(T)}\big) = \mathbb{E}_{\boldsymbol{\sigma}}[n_{\tau_i}^*(i)] + \mathcal{O}\big((K-1)\sqrt{KT\log(T)}\big),$$

where the last equality uses $\sum_{j=1}^K n_{\tau_i}^*(j) = \tau_i$.

∎

Combining Lemma D.5 and Lemma D.6, we get for any Nash equilibrium $\boldsymbol{\sigma} \in \mathrm{NE}(\mathrm{GGTM})$ that

$$n_T^*(i) - \mathcal{O}\big(\sqrt{KT\log(T)}\big) \le \mathbb{E}_{\boldsymbol{\sigma}}[n_T(i)] \le \mathbb{E}_{\boldsymbol{\sigma}}[n_{\tau_i}^*(i)] + \mathcal{O}\big((K-1)\sqrt{KT\log(T)}\big),$$

which implies that

$$\mathbb{E}_{\boldsymbol{\sigma}}\big[ n_T^*(i) - n_{\tau_i}^*(i) \big] \le \mathcal{O}\big(K\sqrt{KT\log(T)}\big). \tag{10}$$

The expression $n_T^*(i) - n_{\tau_i}^*(i)$ counts the rounds in which arm $i$ was optimal but already eliminated by the grim trigger. As a result, we can express the total number of rounds in which the round-optimal arm $i_t^*$ was no longer active as follows.

Lemma D.7.

For any $\boldsymbol{\sigma}$, we have

$$\mathbb{E}_{\boldsymbol{\sigma}}\Big[ \sum_{t=1}^T \mathbb{1}\{ i_t^* \notin A_t \} \Big] = \sum_{i=1}^K \mathbb{E}_{\boldsymbol{\sigma}}\big[ n_T^*(i) - n_{\tau_i}^*(i) \big].$$

Proof.

Rewriting $\{ i_t^* \notin A_t \}$ yields

$$\mathbb{E}_{\boldsymbol{\sigma}}\Big[ \sum_{t=1}^T \mathbb{1}\{ i_t^* \notin A_t \} \Big] = \sum_{i=1}^K \mathbb{E}_{\boldsymbol{\sigma}}\Big[ \sum_{t=1}^T \mathbb{1}\{ i_t^* = i, i \notin A_t \} \Big] = \sum_{i=1}^K \mathbb{E}_{\boldsymbol{\sigma}}\Big[ \sum_{t=1}^T \mathbb{1}\{ i_t^* = i \} - \sum_{t=1}^T \mathbb{1}\{ i_t^* = i, i \in A_t \} \Big] = \sum_{i=1}^K \mathbb{E}_{\boldsymbol{\sigma}}\big[ n_T^*(i) - n_{\tau_i}^*(i) \big].$$

∎

Finally, note that $\langle \theta^*, x_{t,i_t^*}^* - x_{t,i_t}^* \rangle \le 1$, so that from equation (10) it follows that

$$\mathbb{E}\Big[ \sum_{t=1}^T \mathbb{1}\{ i_t^* \notin A_t \} \langle \theta^*, x_{t,i_t^*}^* - x_{t,i_t}^* \rangle \Big] \le \mathbb{E}\Big[ \sum_{t=1}^T \mathbb{1}\{ i_t^* \notin A_t \} \Big] = \sum_{i=1}^K \mathbb{E}\big[ n_T^*(i) - n_{\tau_i}^*(i) \big] \le \sum_{i=1}^K \mathcal{O}\big(K\sqrt{KT\log(T)}\big) = \mathcal{O}\big(K^2\sqrt{KT\log(T)}\big).$$

∎

Combining the bounds on $I_1$ and $I_2$, we then obtain the final regret bound of Theorem 4.2:

$$R_T(\mathrm{GGTM}, \boldsymbol{\sigma}) \le \mathcal{O}\Big( \underbrace{\sqrt{KT\log(T)}}_{\text{Lemma D.3}} + \underbrace{K^2\sqrt{KT\log(T)}}_{\text{Lemma D.4}} \Big) \le \tilde{\mathcal{O}}\big(K^2\sqrt{KT}\big).$$

∎

Remark D.8.

We want to briefly comment on the existence of a Nash equilibrium. Since each arm’s strategy space, given by $\{ x \in \mathbb{R}^d : \|x\|_2 \le 1 \}$ in every round, is continuous, it is not obvious that a Nash equilibrium for the arms exists under every algorithm. However, Glicksberg’s theorem shows that continuity of the arms’ utilities in the arms’ strategies is a sufficient condition for the existence of a NE, since the strategy space is compact. We can then ensure continuity by, e.g., choosing arms proportionally to $\exp(T \langle \theta^*, x_{t,i} \rangle)$ in GGTM and $\exp(T \cdot \mathrm{UCB}_t(x_{t,i}))$ in OptGTM, and remarking that the probability of eliminating arm $i$ in round $t$ is continuous in $x_{t,i}$ conditional on any history. Due to the exponential scaling with $T$, the effect of such slight randomization is negligible in the regret analysis.
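A small sketch of such a softmax selection rule (the function name and parameters are ours, not from the paper): with scaling $T$, the selection distribution is continuous in the reported values yet places essentially all mass on the argmax.

```python
import numpy as np

def softmax_selection(values, T, rng):
    # Select an arm proportionally to exp(T * value).  The exponential
    # scaling in T keeps the rule continuous in the reports while making
    # it almost indistinguishable from the greedy argmax.
    v = np.asarray(values, dtype=float)
    z = T * (v - v.max())          # subtract the max for numerical stability
    p = np.exp(z)
    p /= p.sum()
    return rng.choice(len(v), p=p), p

rng = np.random.default_rng(5)
T = 10_000
_, p = softmax_selection([0.50, 0.49, 0.30], T, rng)
print(p.round(6))   # essentially all probability mass on the argmax
```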

Appendix E Proof of Theorem 5.1 and Theorem 5.2

The following preliminaries are fundamental to the proofs of both Theorem 5.1 and Theorem 5.2, so we derive them jointly here.

E.1 Preliminaries

We begin by recalling the definition of the least-squares estimator w.r.t. arm $i$’s reported contexts and the corresponding confidence ellipsoid $C_{t,i}$. Note that since the arms manipulate their contexts, the least-squares estimator may not accurately estimate $\theta^*$, and $\theta^*$ may not be contained in $C_{t,i}$ w.h.p. As discussed in the main text, since accurate estimation of $\theta^*$ appears hopeless, the main idea is to incentivize arms to report contexts such that our expected reward does not differ substantially from the observed reward.

The least-squares estimator w.r.t. arm $i$ is given by

$$\hat{\theta}_{t,i} = \operatorname*{argmin}_{\theta \in \mathbb{R}^d} \Big( \sum_{\ell < t : i_\ell = i} \big( \langle \theta, x_{\ell,i} \rangle - r_{\ell,i} \big)^2 + \lambda \|\theta\|_2^2 \Big),$$

where $\lambda > 0$. In the algorithm, we set the penalty factor to $\lambda = 1$. The closed-form solution is then given by

$$\hat{\theta}_{t,i} = V_{t,i}^{-1} \sum_{\ell < t : i_\ell = i} x_{\ell,i} r_{\ell,i} \qquad \text{with} \qquad V_{t,i} = \lambda I + \sum_{\ell < t : i_\ell = i} x_{\ell,i} x_{\ell,i}^\top.$$

The confidence set $C_{t,i}$ is defined as

$$C_{t,i} = \Big\{ \theta \in \mathbb{R}^d : \|\hat{\theta}_{t,i} - \theta\|_{V_{t,i}} \le \beta_{t,i} \Big\},$$

where we let $\beta_{t,i} = \sqrt{d \log\big( \frac{1 + n_t(i)/\lambda}{\delta} \big)} + \sqrt{\lambda}\, S$ with $\|\theta^*\|_2 \le S$ and $\delta = 1/T^2$.
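As an illustration of the per-arm estimator and radius (a sketch with illustrative parameters; the helper names are ours), the following checks the ellipsoid condition $\|\hat{\theta}_{t,i} - \theta^*\|_{V_{t,i}} \le \beta_{t,i}$ on synthetic truthful reports:

```python
import numpy as np

def ridge_estimate(X, r, lam=1.0):
    # Per-arm regularized least squares: theta_hat = V^{-1} X^T r with
    # V = lam*I + X^T X, where the rows of X are arm i's reported contexts.
    d = X.shape[1]
    V = lam * np.eye(d) + X.T @ X
    theta_hat = np.linalg.solve(V, X.T @ r)
    return theta_hat, V

def confidence_radius(n, d, lam=1.0, S=1.0, T=1000):
    # beta_{t,i} = sqrt(d * log((1 + n/lam) / delta)) + sqrt(lam)*S, delta = 1/T^2.
    delta = 1.0 / T**2
    return np.sqrt(d * np.log((1 + n / lam) / delta)) + np.sqrt(lam) * S

rng = np.random.default_rng(2)
d, n = 4, 500
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)          # ||x||_2 <= 1
r = X @ theta_star + 0.1 * rng.normal(size=n)          # truthful reports here

theta_hat, V = ridge_estimate(X, r)
beta = confidence_radius(n, d)
err = theta_hat - theta_star
print(np.sqrt(err @ V @ err) <= beta)   # ellipsoid holds w.h.p. under truthfulness
```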

We now translate the standard result used to assert the validity of the confidence set to our situation. Clearly, when the sequence of reports $x_{t,i}$ differs significantly from the true contexts $x_{t,i}^*$, the true parameter $\theta^*$ will not be contained in $C_{t,i}$. Instead, we formulate the concentration result as follows.

Lemma E.1.

Suppose there exists $\theta_i^* \in \mathbb{R}^d$ such that for all $t$ with $i_t = i$:

$$\langle \theta^*, x_{t,i}^* \rangle = \langle \theta_i^*, x_{t,i} \rangle. \tag{11}$$

In other words, the reported features $x_{t,i}$ are linearly realizable by some parameter $\theta_i^*$.

For any $\delta \in (0,1)$, let the confidence radius be

$$\beta_{t,i} = \sqrt{d \log\Big( \frac{1 + n_t(i)/\lambda}{\delta} \Big)} + \sqrt{\lambda}\, S,$$

where $\|\theta_i^*\|_2 \le S$. Note that the typical expression also includes some constant $L$ such that $\|x_{t,i}\|_2 \le L$, which we here simply set to $1$. With probability at least $1 - \delta$ it then holds that $\theta_i^* \in C_{t,i}$ for all $t \in [T]$. In what follows, we choose $\delta = 1/T^2$.

As a special case, when arm $i$ is always truthful so that $x_{t,i} = x_{t,i}^*$, the true parameter $\theta^*$ trivially satisfies (11) and the result reduces to the standard confidence bound statement [1, 24] restricted to observations from arm $i$.

Proof.

Let $\theta_i^*$ satisfy (11). Then, note that $r_{t,i} := \langle \theta^*, x_{t,i}^* \rangle + \eta_t = \langle \theta_i^*, x_{t,i} \rangle + \eta_t$. Hence, the sequence of reported features $x_{t,i}$ and observed rewards $r_{t,i}$ yields a standard linear contextual bandit structure with unknown parameter $\theta_i^*$ (instead of $\theta^*$). The confidence bound then follows from the arguments in [1, 24], where we remark that we choose the confidence radius $\beta_{t,i}$ arm-specific. However, we could also choose a larger confidence radius such as $\beta_t \approx \sqrt{d \log(t)}$, or even a constant $\beta \approx \sqrt{d \log(T)}$. This would only have a negligible effect on the final regret. ∎

The grim trigger condition (4) of OptGTM states that arm $i$ gets eliminated in round $t$ if

$$\sum_{\ell \le t : i_\ell = i} \Big( \langle \hat{\theta}_{\ell,i}, x_{\ell,i} \rangle - \beta_{\ell,i} \|x_{\ell,i}\|_{V_{\ell,i}^{-1}} \Big) > \sum_{\ell \le t : i_\ell = i} r_{\ell,i} + 2\sqrt{n_t(i)\log(T)}. \tag{12}$$

Equivalently, $\sum_{\ell \le t : i_\ell = i} \mathrm{LCB}_{\ell,i}(x_{\ell,i}) > \mathrm{UCB}_t(\hat{r}_{t,i})$.

As a sanity check, we show that when an arm always reports truthfully, i.e., $x_{t,i} = x_{t,i}^*$ for all $t$, it does not get eliminated with probability at least $1 - 2/T^2$.

Lemma E.2.

When arm $i$ always reports truthfully, it does not get eliminated with high probability; that is, $i \in A_T$ with probability at least $1 - 2/T^2$.

Proof.

We consider the event that the true parameter $\theta^*$ is contained in $C_{t,i}$, i.e., $\mathcal{G}_i' := \{ \theta^* \in C_{t,i} \;\; \forall t \in [T] \}$. The event $\mathcal{G}_i'$ has probability at least $1 - 1/T^2$ according to Lemma E.1 when arm $i$ is truthful. Moreover, suppose that the reward observations concentrate as well, i.e., we assume the good event $\mathcal{G} := \{ \mathrm{LCB}_t(\hat{r}_{t,i}) \le \hat{r}_{t,i}^* \le \mathrm{UCB}_t(\hat{r}_{t,i}) \;\; \forall t \in [T], i \in [K] \}$. Recall that $\mathcal{G}$ has probability at least $1 - 1/T^2$ according to Hoeffding’s inequality. A union bound then shows that the intersection of the two events has probability at least $1 - 2/T^2$.

Now, since arm $i$ is truthful, we have $x_{t,i} = x_{t,i}^*$ and $\langle \theta, x_{t,i} \rangle = \langle \theta, x_{t,i}^* \rangle$ for all $\theta \in \mathbb{R}^d$. Using the Cauchy-Schwarz inequality and the fact that $\theta^* \in C_{t,i}$, we get that

$$\langle \hat{\theta}_{t,i}, x_{t,i} \rangle - \langle \theta^*, x_{t,i}^* \rangle = \langle \hat{\theta}_{t,i} - \theta^*, x_{t,i}^* \rangle \le \|\hat{\theta}_{t,i} - \theta^*\|_{V_{t,i}} \|x_{t,i}^*\|_{V_{t,i}^{-1}} \le \beta_{t,i} \|x_{t,i}\|_{V_{t,i}^{-1}}.$$

Moreover, as we work on the good event $\mathcal{G}$, we have

$$\sum_{\ell \le t : i_\ell = i} \big( \langle \theta^*, x_{\ell,i}^* \rangle - r_{\ell,i} \big) \in \Big[ -2\sqrt{n_t(i)\log(T)}, \; +2\sqrt{n_t(i)\log(T)} \Big].$$

Combining these two statements yields

$$\sum_{\ell \le t : i_\ell = i} \big( \langle \hat{\theta}_{\ell,i}, x_{\ell,i} \rangle - r_{\ell,i} \big) \le \sum_{\ell \le t : i_\ell = i} \beta_{\ell,i} \|x_{\ell,i}\|_{V_{\ell,i}^{-1}} + 2\sqrt{n_t(i)\log(T)}$$

for all $t \in [T]$. In other words, the grim trigger condition is never satisfied, so that $i \in A_T$ on the event $\mathcal{G} \cap \mathcal{G}_i'$, which, as we saw, occurs with probability at least $1 - 2/T^2$. ∎

We now analyze the grim trigger of the OptGTM algorithm. As before, let $\tau_i := \min\{ t : i \notin A_t \}$ with the convention that $\tau_i = T$ if $i \in A_T$. The following lemma upper bounds the total amount of manipulation that an arm can exert before being eliminated by OptGTM’s grim trigger elimination rule.

Lemma E.3.

On the good event $\mathcal{G}$:

$$\sum_{t \le \tau_i : i_t = i} \big( \langle \hat{\theta}_{t,i}, x_{t,i} \rangle - \langle \theta^*, x_{t,i}^* \rangle \big) \le \sum_{t \le \tau_i : i_t = i} \beta_{t,i} \|x_{t,i}\|_{V_{t,i}^{-1}} + 4\sqrt{n_{\tau_i}(i)\log(T)}.$$

Or equivalently, since $\mathrm{UCB}_{t,i}(x_{t,i}) = \langle \hat{\theta}_{t,i}, x_{t,i} \rangle + \beta_{t,i} \|x_{t,i}\|_{V_{t,i}^{-1}}$, it holds that

$$\sum_{t \le \tau_i : i_t = i} \big( \mathrm{UCB}_{t,i}(x_{t,i}) - \langle \theta^*, x_{t,i}^* \rangle \big) \le \sum_{t \le \tau_i : i_t = i} 2\beta_{t,i} \|x_{t,i}\|_{V_{t,i}^{-1}} + 4\sqrt{n_{\tau_i}(i)\log(T)}.$$

Proof.

Let $t \in [T]$. On the good event $\mathcal{G} := \{ \mathrm{LCB}_t(\hat{r}_{t,i}) \le \hat{r}_{t,i}^* \le \mathrm{UCB}_t(\hat{r}_{t,i}) \;\; \forall t \in [T], i \in [K] \}$ and by definition of $\mathrm{UCB}_{\ell,i}(x_{\ell,i})$, it follows that

$$\sum_{\ell \le t : i_\ell = i} \big( \langle \hat{\theta}_{\ell,i}, x_{\ell,i} \rangle - r_{\ell,i} \big) \ge \sum_{\ell \le t : i_\ell = i} \big( \mathrm{UCB}_{\ell,i}(x_{\ell,i}) - \langle \theta^*, x_{\ell,i}^* \rangle \big) - \underbrace{\Big( \sum_{\ell \le t : i_\ell = i} \beta_{\ell,i} \|x_{\ell,i}\|_{V_{\ell,i}^{-1}} + 2\sqrt{n_t(i)\log(T)} \Big)}_{=:\, R}.$$

Hence, if $\sum_{\ell \le t : i_\ell = i} \big( \mathrm{UCB}_{\ell,i}(x_{\ell,i}) - \langle \theta^*, x_{\ell,i}^* \rangle \big) > 2R$, then

$$\sum_{\ell \le t : i_\ell = i} \big( \langle \hat{\theta}_{\ell,i}, x_{\ell,i} \rangle - r_{\ell,i} \big) > \sum_{\ell \le t : i_\ell = i} \beta_{\ell,i} \|x_{\ell,i}\|_{V_{\ell,i}^{-1}} + 2\sqrt{n_t(i)\log(T)},$$

which means that arm $i$ must have been eliminated in a previous round or in round $t$, i.e., $\tau_i \le t$. Hence, for any $t \le \tau_i$, the left-hand side must be smaller than or equal to $2R$, which is the claimed bound.

Interestingly, notice that we worked on the good event $\mathcal{G}$, which only concerns the realization of the rewards and not the validity of the confidence set. This is important, since it is generally not true that the true parameter $\theta^*$ is contained in the confidence set $C_{t,i}$.

∎

Lastly, before we begin with the proofs of Theorem 5.1 and Theorem 5.2, we establish a bound on the total exploration bonus, which, after some additional work, follows from the well-known elliptical potential lemma [1, 24].

Lemma E.4.

It holds that

$$\sum_{t \le \tau_i : i_t = i} \beta_{t,i} \|x_{t,i}\|_{V_{t,i}^{-1}} \le \mathcal{O}\big( d \log(T) \sqrt{n_{\tau_i}(i)} \big).$$

The constant on the right-hand side can be derived from the choice of $\beta_{t,i}$.

Proof.

Before we can apply the elliptical potential lemma [1], we need to make sure that the exploration bonus does not blow up in early rounds. To this end, recall the definition of

$$V_{t,i} := \lambda I + \sum_{\ell < t : i_\ell = i} x_{\ell,i} x_{\ell,i}^\top.$$

Let $A = \sum_{\ell \le t : i_\ell = i} x_{\ell,i} x_{\ell,i}^\top$. Note that $A$ is positive semi-definite, so that $\lambda I + A$ is positive definite and so is its inverse $(\lambda I + A)^{-1}$. The matrix inversion lemma lets us express this inverse as

$$(\lambda I + A)^{-1} = \lambda^{-1} I - \lambda^{-1} (\lambda I + A)^{-1} A.$$

Now, the eigenvalues of $B = (\lambda I + A)^{-1} A$ are given by $\lambda_i / (\lambda + \lambda_i)$, where $\lambda_i \ge 0$ are the eigenvalues of $A$, which means that $B$ is positive semi-definite. Consequently,

$$\|x_{t,i}\|_{V_{t,i}^{-1}}^2 = x_{t,i}^\top (\lambda I + A)^{-1} x_{t,i} = \lambda^{-1} x_{t,i}^\top x_{t,i} - \lambda^{-1} x_{t,i}^\top B x_{t,i} \le \lambda^{-1} \|x_{t,i}\|_2^2.$$

We assumed that $\|x_{t,i}\|_2^2 \le 1$ (similarly, we could assume an upper bound $L$), so that $\|x_{t,i}\|_{V_{t,i}^{-1}} \le 1/\sqrt{\lambda}$. For convenience, we set the penalty factor to $\lambda = 1$. We then apply Cauchy-Schwarz to get

$$\sum_{t \le \tau_i : i_t = i} \beta_{t,i} \|x_{t,i}\|_{V_{t,i}^{-1}} \le \beta_{T,i} \sqrt{n_{\tau_i}(i)} \sqrt{ \sum_{t \le \tau_i : i_t = i} \min\big\{ 1, \|x_{t,i}\|_{V_{t,i}^{-1}}^2 \big\} }.$$

The elliptical potential lemma [1, 24] bounds the sum on the right-hand side as

$$\sum_{t \le \tau_i : i_t = i} \min\big\{ 1, \|x_{t,i}\|_{V_{t,i}^{-1}}^2 \big\} \le 2d \log\Big( \frac{d + n_{\tau_i}(i)}{d} \Big).$$

Finally, recall that we chose $\beta_{t,i} = \mathcal{O}\big( \sqrt{d \log(T\, n_t(i))} \big) = \mathcal{O}\big( \sqrt{d \log(T)} \big)$, which then yields the claimed bound. ∎

E.2 Proof of Theorem 5.1

Proof of Theorem 5.1.

We begin by proving that being truthful is an approximate Nash equilibrium under OptGTM.

Truthfulness is a $\tilde{\mathcal{O}}(d\sqrt{KT})$-NE.

In a first step, we show that if every arm is truthful, every arm is guaranteed utility of at least $n_T^*(i) - \tilde{\mathcal{O}}(d\sqrt{KT})$, where $n_T^*(i) := \sum_{t=1}^T \mathbb{1}\{i_t^* = i\}$. To this end, we write

$$n_T(i) = \sum_{t=1}^T \mathbb{1}\{i_t = i,\, i_t^* = i\} + \sum_{t=1}^T \mathbb{1}\{i_t = i,\, i_t^* \neq i\} \ge \sum_{t=1}^T \mathbb{1}\{i_t^* = i\} - \sum_{t=1}^T \mathbb{1}\{i_t^* = i,\, i_t \neq i\},$$

and our task will be bounding the subtracted sum on the right-hand side. We focus on the event that $\theta^* \in \mathcal{C}_{t,i}$ for all $(t,i) \in [T] \times [K]$, which according to Lemma E.1 occurs with probability at least $1 - 1/T^2$.

Since $\theta^* \in \mathcal{C}_{t,i}$, we have $\mathrm{UCB}_{t,i}(x^*_{t,i}) \ge \langle \theta^*, x^*_{t,i} \rangle$ for every $i \in [K]$. Let $i_t^* = i$ but $i_t = j$ with $j \neq i$. Keeping in mind that $x_{t,i} = x^*_{t,i}$ for all $(t,i) \in [T] \times [K]$, since all arms are truthful, this implies that

$$\mathrm{UCB}_{t,j}(x^*_{t,j}) \ge \mathrm{UCB}_{t,i}(x^*_{t,i}) \ge \langle \theta^*, x^*_{t,i_t^*} \rangle.$$

As a result, it holds that

$$\mathrm{UCB}_{t,j}(x^*_{t,j}) - \langle \theta^*, x^*_{t,j} \rangle \ge \langle \theta^*, x^*_{t,i_t^*} - x^*_{t,j} \rangle.$$

Next, since $\theta^* \in \mathcal{C}_{t,j}$ and $\hat{\theta}_{t,j} \in \mathcal{C}_{t,j}$, we find that

$$\begin{aligned}
\mathrm{UCB}_{t,j}(x^*_{t,j}) - \langle \theta^*, x^*_{t,j} \rangle
&\le \langle \hat{\theta}_{t,j} - \theta^*, x^*_{t,j} \rangle + \beta_{t,j} \|x_{t,j}\|_{V_{t,j}^{-1}} \\
&\le \|\hat{\theta}_{t,j} - \theta^*\|_{V_{t,j}} \|x_{t,j}\|_{V_{t,j}^{-1}} + \beta_{t,j} \|x_{t,j}\|_{V_{t,j}^{-1}} \\
&\le 2\beta_{t,j} \|x_{t,j}\|_{V_{t,j}^{-1}},
\end{aligned}$$

where the second line follows from the Cauchy-Schwarz inequality. Using Lemma E.4, this implies

$$\sum_{t \le T : i_t = j} \Big(\mathrm{UCB}_{t,j}(x^*_{t,j}) - \langle \theta^*, x^*_{t,j} \rangle\Big) \le \sum_{t \le T : i_t = j} 2\beta_{t,j} \|x_{t,j}\|_{V_{t,j}^{-1}} \le \mathcal{O}\big(d \log(T) \sqrt{n_{\tau_j}(j)}\big).$$

Since ⟨ πœƒ βˆ— , π‘₯ 𝑑 , 𝑖 𝑑 βˆ— βˆ— βˆ’ π‘₯ 𝑑 , 𝑗 βˆ— ⟩ is constant for 𝑖 𝑑 βˆ— β‰  𝑗 , this means that

βˆ‘ 𝑑

1 𝑇 πŸ™ ⁒ { 𝑖 𝑑 βˆ—

𝑖 , 𝑖 𝑑

𝑗 } ≀ π’ͺ ⁒ ( 𝑑 ⁒ log ⁑ ( 𝑇 ) ⁒ 𝑛 𝜏 𝑗 ⁒ ( 𝑗 ) )

so that by Jensen’s inequality

βˆ‘ 𝑑

1 𝑇 πŸ™ ⁒ { 𝑖 𝑑 βˆ—

𝑖 , 𝑖 𝑑 β‰  𝑖 }

βˆ‘ 𝑗 β‰  𝑖 βˆ‘ 𝑑

1 𝑇 πŸ™ ⁒ { 𝑖 𝑑 βˆ—

𝑖 , 𝑖 𝑑

𝑗 } ≀ π’ͺ ⁒ ( 𝑑 ⁒ log ⁑ ( 𝑇 ) ⁒ 𝐾 ⁒ 𝑇 ) .

In a second step, we show that for any deviating strategy $\sigma_i$ that is not truthful, the utility of arm $i$ is upper bounded by $n_T^*(i) + \tilde{\mathcal{O}}(d\sqrt{T})$. In what follows, we work on the event that $j \in A_t$ and $\theta^* \in \mathcal{C}_{t,j}$ for all $t \in [T]$ and $j \neq i$, and recall that this event has probability at least $1 - 1/T^2$, since the arms $j \neq i$ are reporting truthfully (see Lemma E.2). Since $j \in A_T$ for all $j \neq i$, we have

$$n_T(i) \le \sum_{t=1}^T \mathbb{1}\{i_t^* = i\} + \sum_{t=1}^T \mathbb{1}\{i_t = i,\, i_t^* \neq i\},$$

and we are tasked with bounding the second sum on the right-hand side appropriately. Now, similarly to before, if $i_t = i$ and $i_t^* \neq i$ (given $i_t^* \in A_t$), then

$$\mathrm{UCB}_{t,i}(x_{t,i}) \ge \mathrm{UCB}_{t,i_t^*}(x_{t,i_t^*}) \ge \langle \theta^*, x^*_{t,i_t^*} \rangle,$$

where we used that $\theta^* \in \mathcal{C}_{t,i_t^*}$, since the arm $i_t^* \neq i$ is truthful. Consequently,

$$\mathrm{UCB}_{t,i}(x_{t,i}) - \langle \theta^*, x^*_{t,i} \rangle \ge \langle \theta^*, x^*_{t,i_t^*} - x^*_{t,i} \rangle.$$

Combining Lemma E.3 and Lemma E.4 tells us that

$$\sum_{t \le \tau_i : i_t = i} \Big(\mathrm{UCB}_{t,i}(x_{t,i}) - \langle \theta^*, x^*_{t,i} \rangle\Big) \le \mathcal{O}\big(d \log(T) \sqrt{T}\big),$$

where we coarsely upper bounded $n_{\tau_i}(i)$ by $T$. Since $\langle \theta^*, x^*_{t,i_t^*} - x^*_{t,i} \rangle$ for $i_t^* \neq i$ is positive and constant, it follows that

$$\sum_{t=1}^T \mathbb{1}\{i_t = i,\, i_t^* \neq i\} \le \mathcal{O}\big(d \log(T) \sqrt{T}\big).$$

In summary, we have shown that

$$\mathbb{E}_{\boldsymbol{\sigma}^*}[n_T(i)] \ge n_T^*(i) - \tilde{\mathcal{O}}\big(d \sqrt{KT}\big) \quad \text{and} \quad \mathbb{E}_{\sigma_i, \sigma^*_{-i}}[n_T(i)] \le n_T^*(i) + \tilde{\mathcal{O}}\big(d \sqrt{T}\big)$$

for any deviating strategy $\sigma_i$. Hence, $\boldsymbol{\sigma}^*$ is a $\tilde{\mathcal{O}}(d\sqrt{KT})$-Nash equilibrium under OptGTM.

Regret analysis.

Since we maintain estimates and confidence sets for each arm independently, it is natural to decompose the regret as

$$\sum_{t=1}^T \langle \theta^*, x^*_{t,i_t^*} - x^*_{t,i_t} \rangle = \sum_{i=1}^K \sum_{t \le T : i_t = i} \langle \theta^*, x^*_{t,i_t^*} - x^*_{t,i} \rangle. \tag{13}$$

Note that w.h.p. no truthful arm gets eliminated and $\theta^* \in \mathcal{C}_{t,i}$ for all $(t,i) \in [T] \times [K]$. The regret analysis then proceeds similarly to that of LinUCB.

Since $\theta^* \in \mathcal{C}_{t,i}$, we know for any round in which $i_t = i$ that

$$\langle \theta^*, x^*_{t,i_t^*} \rangle \le \mathrm{UCB}_{t,i_t^*}(x^*_{t,i_t^*}) = \mathrm{UCB}_{t,i_t^*}(x_{t,i_t^*}) \le \mathrm{UCB}_{t,i}(x_{t,i}) = \mathrm{UCB}_{t,i}(x^*_{t,i}).$$

Then, again for any round with $i_t = i$, applying the Cauchy-Schwarz inequality yields

$$\langle \theta^*, x^*_{t,i_t^*} - x^*_{t,i} \rangle \le \mathrm{UCB}_{t,i}(x^*_{t,i}) - \langle \theta^*, x^*_{t,i} \rangle \le 2 \beta_{t,i} \|x_{t,i}\|_{V_{t,i}^{-1}}. \tag{14}$$

Next, first using the Cauchy-Schwarz inequality and the elliptical potential lemma (Lemma E.4), and then Jensen's inequality, it follows that

$$2 \sum_{i=1}^K \sum_{t \le T : i_t = i} \beta_{t,i} \|x_{t,i}\|_{V_{t,i}^{-1}} \le \mathcal{O}\big(d \log(T) \sqrt{KT}\big). \tag{15}$$

Hence, connecting equations (13)-(15), we obtain $R_T(\mathrm{OptGTM}, \boldsymbol{\sigma}^*) \le \tilde{\mathcal{O}}(d\sqrt{KT})$. We see that the additional $\sqrt{K}$ factor emerges due to OptGTM maintaining independent estimates for each arm. Usually, a dependence on the size of the action set can be avoided, since observations from one arm can be used for other arms as well. However, to prevent collusion in the strategic linear contextual bandit, it is important to limit the influence an arm has on the selection (and elimination) of other arms. ∎
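Because the analysis tracks a separate estimate and confidence set per arm, the selection rule behaves like LinUCB run independently for each arm. A minimal sketch of such per-arm optimistic selection follows; this is not OptGTM itself (the grim-trigger elimination is omitted, and a fixed confidence width `beta` is assumed for simplicity):

```python
import numpy as np

class PerArmUCB:
    """LinUCB-style selection with independent per-arm ridge estimates:
    observations from arm i only ever update (V_i, b_i)."""

    def __init__(self, K, d, lam=1.0, beta=1.0):
        self.V = [lam * np.eye(d) for _ in range(K)]  # per-arm design matrices
        self.b = [np.zeros(d) for _ in range(K)]      # per-arm response vectors
        self.beta = beta                              # fixed exploration bonus

    def ucb(self, i, x):
        theta_hat = np.linalg.solve(self.V[i], self.b[i])   # ridge estimate
        bonus = self.beta * np.sqrt(x @ np.linalg.solve(self.V[i], x))
        return theta_hat @ x + bonus

    def select(self, contexts):
        # contexts[i] is the context reported by arm i in this round
        return int(np.argmax([self.ucb(i, x) for i, x in enumerate(contexts)]))

    def update(self, i, x, r):
        self.V[i] += np.outer(x, x)
        self.b[i] += r * x
```

Because each arm's observations never enter another arm's estimate, a misreporting arm cannot corrupt the confidence sets of its competitors, which is the property the collusion remark above relies on, at the price of the extra $\sqrt{K}$ factor.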

E.3 Proof of Theorem 5.2

Proof of Theorem 5.2.

We begin the proof of Theorem 5.2 by decomposing the regret into two expressions, which we then bound separately.

Decomposing strategic regret.

We now decompose the regret of OptGTM into the rounds $t$ in which the optimal arm of round $t$ is still active and the rounds in which it is not. Like before, let $i_t^* := \operatorname{argmax}_{i \in [K]} \langle \theta^*, x^*_{t,i} \rangle$ be the optimal arm in round $t$. For any $\boldsymbol{\sigma} \in \mathrm{NE}(\mathrm{OptGTM})$, we have

$$R_T(\boldsymbol{\sigma}) = \underbrace{\mathbb{E}_{\boldsymbol{\sigma}}\Big[\sum_{t=1}^T \mathbb{1}\{i_t^* \in A_t\}\, \langle \theta^*, x^*_{t,i_t^*} - x^*_{t,i_t} \rangle\Big]}_{J_1} + \underbrace{\mathbb{E}_{\boldsymbol{\sigma}}\Big[\sum_{t=1}^T \mathbb{1}\{i_t^* \notin A_t\}\, \langle \theta^*, x^*_{t,i_t^*} - x^*_{t,i_t} \rangle\Big]}_{J_2}. \tag{16}$$

With the help of Lemma E.3 and Lemma E.4, we now bound $J_1$.

Lemma E.5 (Bounding $J_1$).

$$\mathbb{E}_{\boldsymbol{\sigma}}\Big[\sum_{t=1}^T \mathbb{1}\{i_t^* \in A_t\}\, \langle \theta^*, x^*_{t,i_t^*} - x^*_{t,i_t} \rangle\Big] \le \mathcal{O}\big(d \sqrt{KT} \log(T)\big).$$

Proof.

By design of OptGTM, we have $\langle \theta^*, x^*_{t,i_t^*} \rangle \le \mathrm{UCB}_{t,i_t^*}(x_{t,i_t^*}) \le \mathrm{UCB}_{t,i_t}(x_{t,i_t})$ whenever $i_t^* \in A_t$. Then, on the good event $\mathcal{G}$, Lemma E.3 yields

$$\sum_{t \le \tau_i : i_t = i} \langle \theta^*, x^*_{t,i_t^*} - x^*_{t,i} \rangle \le \sum_{t \le \tau_i : i_t = i} \Big(\mathrm{UCB}_{t,i_t}(x_{t,i_t}) - \langle \theta^*, x^*_{t,i} \rangle\Big) \le 2\Big(2\sqrt{n_{\tau_i}(i) \log(T)} + \sum_{t \le \tau_i : i_t = i} \beta_{t,i} \|x_{t,i}\|_{V_{t,i}^{-1}}\Big).$$

Then, on the good event, we have

$$\begin{aligned}
\sum_{t=1}^T \mathbb{1}\{i_t^* \in A_t\}\, \langle \theta^*, x^*_{t,i_t^*} - x^*_{t,i_t} \rangle
&\le \sum_{i=1}^K \sum_{t=1}^{\tau_i} \mathbb{1}\{i_t = i\}\, \langle \theta^*, x^*_{t,i_t^*} - x^*_{t,i} \rangle \\
&\le \sum_{i=1}^K \sum_{t \le \tau_i : i_t = i} 2 \beta_{t,i} \|x_{t,i}\|_{V_{t,i}^{-1}} + \sum_{i=1}^K 4 \sqrt{n_{\tau_i}(i) \log(T)} \\
&\le \sum_{i=1}^K \mathcal{O}\big(d \log(T) \sqrt{n_{\tau_i}(i)}\big) + \sum_{i=1}^K 4 \sqrt{n_{\tau_i}(i) \log(T)} \\
&\le \mathcal{O}\big(d \log(T) \sqrt{KT}\big),
\end{aligned}$$

where we applied Jensen's inequality in the last inequality and used that $\sum_{i=1}^K n_{\tau_i}(i) \le T$. ∎

Lemma E.6 (Bounding $J_2$).

$$\mathbb{E}_{\boldsymbol{\sigma}}\Big[\sum_{t=1}^T \mathbb{1}\{i_t^* \notin A_t\}\, \langle \theta^*, x^*_{t,i_t^*} - x^*_{t,i_t} \rangle\Big] \le \mathcal{O}\big(d K^2 \sqrt{KT} \log(T)\big).$$

Proof.

The proof idea is the same as that of Lemma D.4, which was used to show the regret upper bound of the Greedy Grim Trigger Mechanism (Theorem 4.2, Appendix D). Recall that by assumption $\langle \theta^*, x^*_{t,i_t^*} - x^*_{t,i_t} \rangle \le 1$. We reuse Lemma D.7 from the proof of Theorem 4.2 to get

$$\mathbb{E}_{\boldsymbol{\sigma}}\Big[\sum_{t=1}^T \mathbb{1}\{i_t^* \notin A_t\}\, \langle \theta^*, x^*_{t,i_t^*} - x^*_{t,i_t} \rangle\Big] \le \mathbb{E}_{\boldsymbol{\sigma}}\Big[\sum_{t=1}^T \mathbb{1}\{i_t^* \notin A_t\}\Big] = \sum_{i=1}^K \mathbb{E}_{\boldsymbol{\sigma}}\big[n_T^*(i) - n^*_{\tau_i}(i)\big],$$

where $n_t^*(i) := \sum_{s=1}^t \mathbb{1}\{i_s^* = i\}$ is the number of rounds up to round $t$ in which $i$ is the optimal arm. To bound the right-hand side, we first prove a lower bound on $\mathbb{E}_{\boldsymbol{\sigma}}[n_T(i)]$ for any NE $\boldsymbol{\sigma} \in \mathrm{NE}(\mathrm{OptGTM})$.

Lemma E.7.

Let $\boldsymbol{\sigma} \in \mathrm{NE}(\mathrm{OptGTM})$. It holds that

$$\mathbb{E}_{\boldsymbol{\sigma}}[n_T(i)] \ge n_T^*(i) - \mathcal{O}\big(d \log(T) \sqrt{KT}\big).$$

In particular, it holds that $\mathbb{E}_{\boldsymbol{\sigma}}[n_t(i)] \ge n_t^*(i) - \mathcal{O}(d \log(T) \sqrt{KT})$ for all $t \in [T]$.

Since $\mathcal{G}$ occurs with probability $1 - 1/T^2$ and $n_T(i) \le T$ by definition, on the event $\mathcal{G}$ we have

$$n_T(i) \ge n_T^*(i) - \mathcal{O}\big(d \log(T) \sqrt{KT}\big) \quad \text{and} \quad n_t(i) \ge n_t^*(i) - \mathcal{O}\big(d \log(T) \sqrt{KT}\big).$$

Proof.

Given that arm $i$ always reports truthfully, i.e., $x_{t,i} = x^*_{t,i}$ for all $t$, consider the event that $\theta^* \in \mathcal{C}_{t,i}$ for all $t$. Recall that this event has probability at least $1 - 1/T^2$ according to Lemma E.1.

We use the best-response property of the Nash equilibrium by comparing against the truthful strategy. To this end, consider the strategy profile $(\sigma_i^*, \sigma_{-i})$ and the event that $\theta^* \in \mathcal{C}_{t,i}$ for all $t$, as well as $\mathcal{G}$. We then have that

$$n_T(i) \ge \sum_{t=1}^T \mathbb{1}\{i_t^* = i\} - \sum_{t=1}^T \mathbb{1}\{i_t^* = i,\, i_t \neq i\} = n_T^*(i) - \sum_{j \neq i} \sum_{t=1}^{\tau_i} \mathbb{1}\{i_t^* = i,\, i_t = j\}, \tag{17}$$

where the sum on the right-hand side counts the rounds in which $i$ is optimal but OptGTM pulls another arm (because that arm reported a larger optimistic value).

Next, recall that $\theta^* \in \mathcal{C}_{t,i}$, so that for $i_t^* = i$ it follows that

$$\mathrm{UCB}_{t,i}(x_{t,i}) = \mathrm{UCB}_{t,i}(x^*_{t,i}) \ge \langle \theta^*, x^*_{t,i} \rangle = \langle \theta^*, x^*_{t,i_t^*} \rangle.$$

Now, $i_t = j$ implies $\mathrm{UCB}_{t,j}(x_{t,j}) \ge \mathrm{UCB}_{t,i}(x_{t,i}) \ge \langle \theta^*, x^*_{t,i_t^*} \rangle$, since $i \in A_t$ for all $t$ and OptGTM selects the arm with maximal optimistic value. As a result, when $i_t^* = i$ and $i_t = j$, we obtain that

$$\mathrm{UCB}_{t,j}(x_{t,j}) - \langle \theta^*, x^*_{t,j} \rangle \ge \langle \theta^*, x^*_{t,i_t^*} - x^*_{t,j} \rangle.$$

Importantly, we have shown in Lemma E.3 that the total difference on the left-hand side is bounded before elimination, i.e., before round $\tau_j$. As a consequence, we get

$$\begin{aligned}
\sum_{t=1}^{\tau_i} \mathbb{1}\{i_t^* = i,\, i_t = j\}\, \langle \theta^*, x^*_{t,i_t^*} - x^*_{t,j} \rangle
&\le \sum_{t=1}^{\tau_i} \mathbb{1}\{i_t^* = i,\, i_t = j\} \Big(\mathrm{UCB}_{t,j}(x_{t,j}) - \langle \theta^*, x^*_{t,j} \rangle\Big) \\
&\le \sum_{t=1}^{\tau_i} \mathbb{1}\{i_t = j\} \Big(\mathrm{UCB}_{t,j}(x_{t,j}) - \langle \theta^*, x^*_{t,j} \rangle\Big) \\
&\le 2 \sum_{t=1}^{\tau_j} \mathbb{1}\{i_t = j\}\, \beta_{t,j} \|x_{t,j}\|_{V_{t,j}^{-1}} + 4 \sqrt{n_{\tau_j}(j) \log(T)} \\
&\le \mathcal{O}\big(d \log(T) \sqrt{n_{\tau_j}(j)}\big) + 4 \sqrt{n_{\tau_j}(j) \log(T)} \\
&\le \mathcal{O}\big(d \log(T) \sqrt{n_{\tau_j}(j)}\big),
\end{aligned}$$

where we first used Lemma E.3 and then Lemma E.4. Recalling that $\langle \theta^*, x^*_{t,i_t^*} - x^*_{t,j} \rangle > 0$ is constant for $j \neq i_t^*$ by the assumption of a constant optimality gap, it then follows from Jensen's inequality that

$$\sum_{j \neq i} \sum_{t=1}^{\tau_i} \mathbb{1}\{i_t^* = i,\, i_t = j\} \le \sum_{j \neq i} \mathcal{O}\big(d \log(T) \sqrt{n_{\tau_j}(j)}\big) \le \mathcal{O}\big(d \log(T) \sqrt{KT}\big),$$

where we used that $\sum_{j \neq i} n_{\tau_j}(j) \le T$. Hence, equation (17) yields

$$n_T(i) \ge n_T^*(i) - \mathcal{O}\big(d \log(T) \sqrt{KT}\big).$$

Since $\sigma_i$ must be a best response to $\sigma_{-i}$, we obtain

$$\mathbb{E}_{\boldsymbol{\sigma}}[n_T(i)] \ge \mathbb{E}_{\sigma_i^*, \sigma_{-i}}[n_T(i)] \ge n_T^*(i) - \mathcal{O}\big(d \log(T) \sqrt{KT}\big).$$

To obtain the result for all $t \in [T]$, suppose to the contrary that on the good event $\mathcal{G}$ we have $n_t(i) < n_t^*(i) - \omega(d \log(T) \sqrt{KT})$. However, similarly to before, Lemma E.3 tells us that the number of rounds that are "poached" from arm $i$, i.e., rounds with $i_t^* = i$ and $i_t \neq i$, is bounded from above by $\mathcal{O}(d \log(T) \sqrt{Kt})$. Hence, since $\mathrm{UCB}_{t,i}(x_{t,i}) \ge \langle \theta^*, x^*_{t,i_t^*} \rangle$ and $n_t(i) \le n_t^*(i) - \omega(d \log(T) \sqrt{KT})$, it must hold that $\tau_i < t$. Then, on the good event $\mathcal{G}$, it follows that $n_{\tau_i}(i) \le n_t(i) < n_t^*(i) - \omega(d \log(T) \sqrt{KT})$. Since the event $\mathcal{G}$ has probability at least $1 - 1/T^2$, this implies

$$\mathbb{E}_{\boldsymbol{\sigma}}[n_T(i)] = \mathbb{E}_{\boldsymbol{\sigma}}[n_{\tau_i}(i)] \le n_t^*(i) - \omega\big(d \log(T) \sqrt{KT}\big).$$

This contradicts the lower bound $\mathbb{E}_{\boldsymbol{\sigma}}[n_T(i)] \ge n_T^*(i) - \mathcal{O}(d \log(T) \sqrt{KT})$. ∎

Lemma E.8.

Let $\boldsymbol{\sigma} \in \mathrm{NE}(\mathrm{OptGTM})$. Then,

$$\mathbb{E}_{\boldsymbol{\sigma}}[n_T(i)] \le \mathbb{E}_{\boldsymbol{\sigma}}[n^*_{\tau_i}(i)] + \mathcal{O}\big(d \log(T)\, K \sqrt{KT}\big),$$

where the expectation on the right-hand side is taken w.r.t. $\tau_i$.

Proof.

We have $\sum_{i=1}^K n^*_{\tau}(i) = \sum_{i=1}^K \sum_{t=1}^{\tau} \mathbb{1}\{i_t^* = i\} = \tau$ for any $\tau \in [T]$. Using Lemma E.7, we obtain

$$\begin{aligned}
\mathbb{E}_{\boldsymbol{\sigma}}[n_T(i)]
&= \mathbb{E}_{\boldsymbol{\sigma}}[n_{\tau_i}(i)]
= \mathbb{E}_{\boldsymbol{\sigma}}\Big[\sum_{t=1}^{\tau_i} \mathbb{1}\{i_t = i\}\Big]
= \mathbb{E}_{\boldsymbol{\sigma}}\Big[\sum_{t=1}^{\tau_i} \big(1 - \mathbb{1}\{i_t \neq i\}\big)\Big] \\
&= \mathbb{E}_{\boldsymbol{\sigma}}[\tau_i] - \sum_{j \neq i} \mathbb{E}_{\boldsymbol{\sigma}}[n_{\tau_i}(j)] \\
&\le \mathbb{E}_{\boldsymbol{\sigma}}[\tau_i] - \sum_{j \neq i} \mathbb{E}_{\boldsymbol{\sigma}}[n^*_{\tau_i}(j)] + \mathcal{O}\big(d \log(T)\, K \sqrt{KT}\big) \\
&= \mathbb{E}_{\boldsymbol{\sigma}}[n^*_{\tau_i}(i)] + \mathcal{O}\big(d \log(T)\, K \sqrt{KT}\big).
\end{aligned}$$

∎
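The bookkeeping in the proof of Lemma E.8 rests on the counting identity $\tau - \sum_{j \neq i} n^*_\tau(j) = n^*_\tau(i)$ for any cutoff $\tau$, since exactly one arm is optimal per round. A small randomized check (the arm sequence and cutoffs below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
K, T = 4, 100
opt = rng.integers(0, K, size=T)          # i_t^* for each round t

def n_star(i, tau):
    """Number of rounds up to tau in which arm i is optimal."""
    return int(np.sum(opt[:tau] == i))

for i in range(K):
    tau = int(rng.integers(1, T + 1))     # arbitrary cutoff, playing the role of tau_i
    assert tau - sum(n_star(j, tau) for j in range(K) if j != i) == n_star(i, tau)
```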

Combining the lower and upper bounds on each arm's utility from Lemma E.7 and Lemma E.8, we get

$$n_T^*(i) - \mathcal{O}\big(d \log(T) \sqrt{KT}\big) \le \mathbb{E}_{\boldsymbol{\sigma}}[n_T(i)] \le \mathbb{E}_{\boldsymbol{\sigma}}[n^*_{\tau_i}(i)] + \mathcal{O}\big(d \log(T)\, K \sqrt{KT}\big). \tag{18}$$

It then follows that

$$\mathbb{E}_{\boldsymbol{\sigma}}\big[n_T^*(i) - n^*_{\tau_i}(i)\big] \le \mathcal{O}\big(d \log(T)\, K \sqrt{KT}\big),$$

so that

$$\sum_{i=1}^K \mathbb{E}_{\boldsymbol{\sigma}}\big[n_T^*(i) - n^*_{\tau_i}(i)\big] \le \mathcal{O}\big(d \log(T)\, K^2 \sqrt{KT}\big).$$

This concludes the proof of Lemma E.6. ∎

Finally, recalling the regret decomposition from the beginning of the proof and using Lemma E.5 and Lemma E.6, we obtain for any $\boldsymbol{\sigma} \in \mathrm{NE}(\mathrm{OptGTM})$ that

$$R_T(\mathrm{OptGTM}, \boldsymbol{\sigma}) \le \mathcal{O}\Big(\underbrace{d \log(T) \sqrt{KT}}_{\text{Lemma E.5}} + \underbrace{d \log(T)\, K^2 \sqrt{KT}}_{\text{Lemma E.6}}\Big) \le \tilde{\mathcal{O}}\big(d K^2 \sqrt{KT}\big).$$

∎

E.4 Linear Realizability of Reported Contexts

In the following, we comment on an interesting observation about the strategic linear contextual bandit that may also provide some insight into the effectiveness of OptGTM. Suppose that each arm reports its contexts in a linearly realizable fashion (without us explicitly incentivizing it to do so). Formally, we can express this as the following assumption.

Assumption 3 (Linear Realizability of Reported Contexts).

Every arm reports its contexts so that they follow some linear reward model. That is, for every arm $i \in [K]$, there exists a vector $\theta_i^* \in \mathbb{R}^d$ such that for all $t \in [T]$

$$\langle \theta^*, x^*_{t,i} \rangle = \langle \theta_i^*, x_{t,i} \rangle. \tag{19}$$

Perhaps surprisingly, the regret analysis of OptGTM becomes straightforward when the arms' strategies satisfy Assumption 3. Moreover, we can prove that OptGTM suffers only $\tilde{\mathcal{O}}(d\sqrt{KT})$ strategic regret in every Nash equilibrium of the arms. That is, the regret guarantee is better than that of Theorem 5.2.

A quick regret analysis.

Let $\boldsymbol{\sigma}$ be any NE under OptGTM. When we observe a reward $r_{t,i}$ after pulling arm $i$ in round $t$, we can interpret the reward as $r_{t,i} := \langle \theta^*, x^*_{t,i} \rangle + \eta_t = \langle \theta_i^*, x_{t,i} \rangle + \eta_t$. Hence, isolating arm $i$, the learner is essentially playing a linear contextual bandit with true unknown parameter $\theta_i^*$, contexts $x_{t,i}$, and rewards $r_{t,i} = \langle \theta_i^*, x_{t,i} \rangle + \eta_t$. As a result, the independent estimator $\hat{\theta}_{t,i}$ for each arm $i$ is in fact estimating $\theta_i^*$ and, according to Lemma E.1, with high probability $\theta_i^* \in \mathcal{C}_{t,i}$. It is then also easy to see that, with high probability, OptGTM never eliminates any of the arms. Now, since $\theta_i^* \in \mathcal{C}_{t,i}$,

$$\langle \theta^*, x^*_{t,i} \rangle = \langle \theta_i^*, x_{t,i} \rangle \le \mathrm{UCB}_{t,i}(x_{t,i}).$$

As a result, using the Cauchy-Schwarz inequality, we obtain

$$\begin{aligned}
\mathrm{UCB}_{t,i}(x_{t,i}) - \langle \theta^*, x^*_{t,i} \rangle
&\le \langle \hat{\theta}_{t,i} - \theta_i^*, x_{t,i} \rangle + \beta_{t,i} \|x_{t,i}\|_{V_{t,i}^{-1}} \\
&\le \|\hat{\theta}_{t,i} - \theta_i^*\|_{V_{t,i}} \|x_{t,i}\|_{V_{t,i}^{-1}} + \beta_{t,i} \|x_{t,i}\|_{V_{t,i}^{-1}} \le 2 \beta_{t,i} \|x_{t,i}\|_{V_{t,i}^{-1}}.
\end{aligned}$$

In every round $t$, OptGTM selects the arm with maximal optimistic reported value $\mathrm{UCB}_{t,i}(x_{t,i})$, so that $\mathrm{UCB}_{t,i_t^*}(x_{t,i_t^*}) - \mathrm{UCB}_{t,i_t}(x_{t,i_t}) \le 0$. We can then bound the instantaneous regret in round $t$ as

$$\begin{aligned}
\langle \theta^*, x^*_{t,i_t^*} - x^*_{t,i_t} \rangle
&\le \mathrm{UCB}_{t,i_t^*}(x_{t,i_t^*}) - \langle \theta^*, x^*_{t,i_t} \rangle \\
&\le \mathrm{UCB}_{t,i_t^*}(x_{t,i_t^*}) - \mathrm{UCB}_{t,i_t}(x_{t,i_t}) + 2 \beta_{t,i_t} \|x_{t,i_t}\|_{V_{t,i_t}^{-1}} \\
&\le 2 \beta_{t,i_t} \|x_{t,i_t}\|_{V_{t,i_t}^{-1}}.
\end{aligned}$$

Using the elliptical potential lemma and Jensen's inequality (Lemma E.4, [1, 24]), the total regret of OptGTM is bounded by

$$R_T(\mathrm{OptGTM}, \boldsymbol{\sigma}) \le \sum_{i=1}^K \sum_{t \le T : i_t = i} 2 \beta_{t,i} \|x_{t,i}\|_{V_{t,i}^{-1}} \le \mathcal{O}\big(d \log(T) \sqrt{KT}\big).$$

We have thus shown the following guarantee.

Corollary E.9.

Suppose that Assumption 3 holds. Then,

$$R_T(\mathrm{OptGTM}, \boldsymbol{\sigma}) \le \tilde{\mathcal{O}}\big(d \sqrt{KT}\big)$$

for every Nash equilibrium $\boldsymbol{\sigma} \in \mathrm{NE}(\mathrm{OptGTM})$.
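To see why per-arm estimation stays consistent under Assumption 3, consider one concrete, hypothetical way reports can satisfy equation (19): arm $i$ reports $x_{t,i} = A_i x^*_{t,i}$ for some invertible matrix $A_i$, in which case $\theta_i^* = A_i^{-\top} \theta^*$. A ridge estimate built from reported contexts and realized rewards then recovers $\theta_i^*$, as in the sketch below (the misreporting map, noise level, and sample size are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lam = 3, 5000, 1.0
theta_star = rng.normal(size=d)
A = rng.normal(size=(d, d)) + 3 * np.eye(d)       # hypothetical invertible misreporting map
theta_i = np.linalg.inv(A).T @ theta_star         # <theta*, x*> = <theta_i, A x*> for all x*

V, b = lam * np.eye(d), np.zeros(d)
for _ in range(n):
    x_true = rng.normal(size=d)                   # private context x*_{t,i}
    x_rep = A @ x_true                            # reported context x_{t,i}
    r = theta_star @ x_true + 0.1 * rng.normal()  # reward depends on the true context
    V += np.outer(x_rep, x_rep)
    b += r * x_rep
theta_hat = np.linalg.solve(V, b)                 # per-arm ridge estimate from reports
print(np.linalg.norm(theta_hat - theta_i))        # small: the estimator targets theta_i^*
```

The estimator never sees the true contexts, yet its predictions $\langle \hat{\theta}_{t,i}, x_{t,i} \rangle$ converge to the true rewards $\langle \theta^*, x^*_{t,i} \rangle$, which is exactly the calibration the quick regret analysis exploits.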
