Meta-learning with Stochastic Linear Bandits
Leonardo Cella¹² Alessandro Lazaric³ Massimiliano Pontil²
Abstract
We investigate meta-learning procedures in the setting of stochastic linear bandit tasks. The goal is to select a learning algorithm that works well on average over a class of bandit tasks sampled from a task distribution. Inspired by recent work on learning-to-learn linear regression, we consider a class of bandit algorithms that implement a regularized version of the well-known OFUL algorithm, where the regularization is a squared Euclidean distance to a bias vector. We first study the benefit of the biased OFUL algorithm in terms of regret minimization. We then propose two strategies to estimate the bias within the learning-to-learn setting. We show, both theoretically and experimentally, that when the number of tasks grows and the variance of the task distribution is small, our strategies have a significant advantage over learning the tasks in isolation.
1. Introduction
The multi-armed bandit (MAB) framework (Lattimore & Szepesvári, 2020; Auer et al., 2002; Siegmund, 2003; Robbins, 1952; Cesa-Bianchi, 2016; Bubeck et al., 2012) is a simple formalization of the online learning problem under partial feedback. In the last decades it has received increasing attention due to its wide practical importance and the theoretical challenges in designing principled and efficient learning algorithms. Applications range from recommender systems (Li et al., 2010; Cella & Cesa-Bianchi, 2019; Bogers, 2010), to clinical trials (Villar et al., 2015), and to adaptive routing (Awerbuch & Kleinberg, 2008), among others.
In this paper, we are concerned with linear bandits (Abbasi-Yadkori et al., 2011; Chu et al., 2011; Auer, 2003), a consolidated MAB setting in which each arm is associated with a vector of features and the arm payoff function is modeled by an (unknown) linear regression of the arm feature vector. Our study builds upon the OFUL algorithm introduced in (Abbasi-Yadkori et al., 2011), which in turn improved the theoretical analysis initially developed in (Chu et al., 2011; Auer, 2003). Nonetheless, OFUL may still require a long exploration phase to accurately estimate the unknown linear regression vector. An appealing approach to this bottleneck is to leverage already completed tasks by transferring the previously collected experience to speed up the learning process. This framework finds its most common application in the recommender system domain, where we wish to recommend content to a new user by matching their preferences. Our objective is to rely on past interactions, corresponding to the navigation histories of different users, to speed up the learning of the new task.
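To make the biased-regularization idea concrete, the following minimal Python sketch (not the paper's implementation; the function names, the bias vector `bias`, and the confidence width `beta` are illustrative choices) combines a ridge estimate shrunk toward a bias vector, as described in the abstract, with an OFUL-style optimistic arm selection:

```python
import numpy as np

def biased_ridge_estimate(X, y, bias, lam):
    """Ridge regression regularized toward `bias` rather than zero:
    argmin_theta ||X theta - y||^2 + lam * ||theta - bias||^2.
    Closed form: bias + (X^T X + lam I)^{-1} X^T (y - X bias)."""
    d = X.shape[1]
    V = X.T @ X + lam * np.eye(d)
    return bias + np.linalg.solve(V, X.T @ (y - X @ bias))

def ucb_arm(arms, theta_hat, V, beta):
    """Optimism in the face of uncertainty: pick the arm maximizing
    the upper confidence bound <x, theta_hat> + beta * ||x||_{V^{-1}}."""
    V_inv = np.linalg.inv(V)
    scores = [x @ theta_hat + beta * np.sqrt(x @ V_inv @ x) for x in arms]
    return int(np.argmax(scores))
```

When `lam` is large, the estimate collapses onto `bias`, so a bias close to the task's true parameter lets the learner act near-optimally with little exploration; when `lam` is small, the method reduces to standard (unbiased) ridge regression as used in OFUL.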
Previous Work During the past decade, there have been numerous theoretical investigations of transfer learning, with particular attention to the problems of multi-task learning (MTL) (Ando & Zhang, 2005; Maurer & Pontil, 2013; Maurer et al., 2013; 2016; Cavallanti et al., 2010) and learning-to-learn (LTL), or meta-learning (Baxter, 2000; Alquier et al., 2017; Denevi et al., 2018a;b; 2019; Pentina & Urner, 2016). The main difference between these two settings is that MTL aims to learn well on a prescribed set of tasks (the learned model is tested on the same tasks used during training), whereas LTL studies the problem of selecting a learning algorithm that works well on tasks from a common environment (i.e., sampled from a prescribed distribution), relying on already completed tasks from the same environment (Pentina & Urner, 2016; Balcan et al., 2019; Denevi et al., 2018a; 2019). In either case, the base tasks considered have always been supervised learning ones. Recently, the MTL setting has been extended to a class of bandit tasks, with encouraging empirical and theoretical results (Azar et al., 2013; Calandriello et al., 2014; Zhang & Bareinboim, 2017; Deshmukh et al., 2017; Liu et al., 2018), as has the case where tasks belong to a (social) graph, a setting usually referred to as collaborative linear bandits (Cesa-Bianchi et al., 2013; Soare et al., 2014; Gentile et al., 2014; 2017). Differently from these works, the principal goal of this paper is to investigate the adoption of the meta-learning framework, which has been
¹University of Milan, ²Istituto Italiano di Tecnologia, ³Facebook AI Research. Correspondence to: Leonardo Cella leonardocella@gmail.com.
Proceedings of the 37th International Conference on Machine Learning, Online, PMLR 119, 2020. Copyright 2020 by the author(s).