| { |
| "title": "Online Causal Inference for Advertising in Real-Time Bidding AuctionsThis paper was previously circulated under the title “Online inference for advertising auctions.” We thank Nan Xu for close collaboration at earlier stages of this paper. We thank Tong Geng, Jun Hao, Xiliang Lin, Lei Wu, Paul Yan, Bo Zhang, Liang Zhang and Lizhou Zheng from JD.com for their collaboration; seminar participants at Cornell Johnson, Berkeley EECS, Stanford GSB: OIT/Marketing, UCSD Rady, Yale SOM and Insper; at the 2019 Conference on Structural Dynamic Models (Chicago Booth), the 2019 Midwest IO Fest, the 2020 Conference on AI/ML and Business Analytics (Temple Fox), the 2020 Marketing Science Conference, the May 2021 QME Rossi Seminar, and the 18th SICS Conference; Mohsen Bayati, Rob Bray, Isa Chaves, Shi Dong, Yoni Gur, Yanjun Han, Günter Hitsch, Lalit Jain, Blake McShane, Kanishka Misra, Sanjog Misra, Rob Porter, Adam Smith, Raluca Ursu, Ben Van Roy and Stefan Wager in particular for helpful comments; and Caroline Wang and especially Vitalii Tubdenov for excellent research assistance. Please contact the authors at caio.waisman@kellogg.northwestern.edu (Waisman), hsnair@gmail.com (Nair) or carrion@gatech.edu (Carrion) for correspondence.", |
| "abstract": "Real-time bidding systems, which utilize auctions to allocate user impressions to competing advertisers, continue to enjoy success in digital advertising. Assessing the effectiveness of such advertising remains a challenge in research and practice. This paper proposes a new approach to perform causal inference on advertising bought through such mechanisms. Leveraging the economic structure of first- and second-price auctions, we establish novel results that show how the effects of advertising are connected to and hence identified from optimal bids. Importantly, we also outline the precise conditions under which these relationships hold. Since these optimal bids are required to estimate the effects of advertising, we present an adapted Thompson Sampling algorithm to solve a multi-armed bandit problem that succeeds in recovering such bids and, consequently, the effects of advertising, while minimizing the costs of experimentation. We use data from real-time bidding auctions to show that it outperforms commonly used methods to estimate the effects of advertising.", |
| "sections": [ |
| { |
| "section_id": "1", |
| "parent_section_id": null, |
| "section_name": "Introduction", |
| "text": "The dominant way of selling ad impressions on ad exchanges (AdXs) is through real-time bidding (RTB) systems, which utilize auctions to allocate user impressions arriving at digital publishers to competing advertisers or intermediaries such as demand-side platforms (DSPs). Most RTB auctions on AdXs are single-unit second-price auctions (SPAs) or single-unit first-price auctions (FPAs). The complexity and scale of available ad inventory, the speed of transactions, and the complex nature of competition imply that advertisers participating in RTB auctions have significant uncertainty about the value of the impressions they are bidding for and the competition they face. Developing accurate and reliable ways of measuring the value of advertising in this environment is essential for advertisers to profitably trade at AdXs and to ensure that acquired ad impressions generate sufficient value. Measurement needs to deliver incremental effects of ads for different types of ad and impression characteristics and needs to be automated. Experimentation thus becomes an attractive way to obtain credible estimates of such causal effects.\nMotivated by this, we present a new approach to perform causal inference on RTB advertising in both SPA and FPA settings. Our approach enables learning heterogeneity in the inferred average causal effects across ad and impression characteristics. The novelty of our approach is in addressing the two main challenges that confront developing a scalable experimental design for RTB advertising.\nThe first challenge is that measuring the average treatment effect () of advertising requires comparing outcomes of users who are exposed to ads with those of users who are not. A difficulty of the RTB setting is that ad exposure is not under complete control of the experimenter because it is determined by an auction. This precludes the use of simpler experimental designs where ad exposure is directly randomized. 
Instead, we consider a design in which the experimenter controls only an input to the auction, the bid, but wishes to measure the effect of a stochastic outcome induced by this input, ad exposure.\nThe second challenge is managing the cost of experimentation. Obtaining experimental impressions is costly: one has to win an auction to observe the outcome with ad exposure and lose the auction to observe the outcome without it. Costs can be substantial when bidding is not optimized. First, they can emerge from overbidding, in which case the realized cost is high because the paid amount is not compensated by the outcome that is obtained from winning the auction. Second, they can emerge from underbidding, in which case the opportunity cost from losing the auction is high because the outcome that is obtained from losing is lower than the outcome that would have been obtained from paying a high enough bid to win the auction. With potentially millions of auctions in an experiment, suboptimal bidding can make experimentation unviable. Therefore, to be successful, an experimental design has to deliver inference on the causal effect of ads while also managing the cost of experimentation by implementing a bidding policy that is optimized to the value of the impressions encountered.\nIt is not obvious how to design an experiment that addresses both challenges simultaneously: optimal bidding requires knowing the value of each impression, whose estimation was the goal of experimentation in the first place.
Thus, online methods, which introduce randomization to induce exploration of the value of advertising with concurrent exploitation of the information learned to optimize bids, become very attractive in such settings.\nAt the core of these online methods is the need to account for the goal of learning the ATE of ad exposure (henceforth called the advertiser’s “inference goal”) and the goal of learning the optimal bidding policy (henceforth called the advertiser’s “economic goal”) concurrently. The tension is that finding the optimal bidding policy does not guarantee proper estimation of ad effects and vice versa. At one extreme, with a bidding policy that delivers on the economic goal, the advertiser could win most of the time, making it difficult to measure ad effects since outcomes with no ad exposures would be scarcely observed. At the other extreme, with pure bid randomization the advertiser could estimate ad effects and deliver on the inference goal but may end up incurring large economic losses in the process.\nWe contribute by framing the advertiser’s problem as a multi-armed bandit (MAB) problem and introducing a statistical learning framework that addresses both these considerations. In our design, observed heterogeneity is summarized by a context, bids form arms, and the advertiser’s monetary auction payoffs form the rewards, so that the best arm, or optimal bid, maximizes the advertiser’s expected payoff from auction participation given the context. Exploiting the economic structure of SPAs and FPAs, we outline precise conditions to derive the link between the optimal bid at a given context and the ATE of the ad at that context. For SPAs, we show that these two objects are equal, so that the twin tasks of learning the optimal bidding policy and estimating ad effects are perfectly aligned. For FPAs, we demonstrate that the two goals are closely related, though only imperfectly aligned.
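The SPA alignment can be illustrated numerically. The sketch below is self-contained and uses a made-up uniform distribution for the highest competing bid and a hypothetical ATE of 0.5 (not the paper's data or DGP); it estimates the expected second-price payoff over a grid of bids and finds the maximizer at the bidder's value:

```python
import numpy as np

rng = np.random.default_rng(0)

v = 0.5                                  # hypothetical ATE of ad exposure (value of winning)
B = rng.uniform(0.0, 1.0, size=200_000)  # highest competing bid, assumed U(0,1) for illustration

def spa_expected_payoff(b):
    # Second-price auction: win when b > B, pay the second price B, gain v.
    return np.mean((v - B) * (b > B))

bids = np.linspace(0.0, 1.0, 101)
best_bid = bids[int(np.argmax([spa_expected_payoff(b) for b in bids]))]
print(best_bid)  # close to v = 0.5: bidding the ATE maximizes expected payoff
```

Bidding above v wins auctions that cost more than they are worth (overbidding), while bidding below it forgoes profitable wins (underbidding), mirroring the experimentation costs discussed above.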
In both cases, we show that the ATEs are identified from the optimal bids, so that tackling the economic goal, that is, solving the MAB problem, suffices to estimate ad effects in addition to learning the optimal bidding policy.\nTo implement our proposed framework, we present a modified Thompson Sampling (TS) algorithm customized to our auction environment, trained online via Markov Chain Monte Carlo (MCMC) methods, which we refer to as Bidding Thompson Sampling (BITS). TS is a Bayesian algorithm, which is an attractive way of tackling this problem because it can easily incorporate prior information and flexibly exploit the shared information across arms and contexts by exploiting the full structure of the data. The algorithm adaptively chooses bids across rounds based on current estimates of which bid is the optimal one. These estimates are updated each round via MCMC through Gibbs sampling with data augmentation and the random walk Metropolis-Hastings algorithm.\nUsing the iPinYou data set, which contains information on RTB auctions, we show through a series of simulation exercises that our proposed algorithm is able to recover the ATEs of advertising and incurs substantially lower costs of experimentation compared to typical non-adaptive and adaptive approaches. This illustrates the viability of our approach and demonstrates its superior performance against popular competing benchmarks on the economic and inference goals. Importantly, the results indicate that while accounting for correlation in rewards across bids even in a reduced-form way can be helpful in accomplishing the advertiser’s goals, further exploiting the structure of the data is more consequential. This suggests that even simpler versions of BITS, which are easier and faster to implement, can be more attractive than commonly used methods to estimate the effects of advertising.\nTo summarize, the high-level contributions of this paper are twofold.
First, it derives explicit mappings between the optimal bids in second-price and first-price auctions and the average treatment effect of ad exposure, thus characterizing the extent of alignment between these objects. It obtains these results by first framing the advertiser’s payoff from auction participation as a function of the potential outcomes associated with ad exposure and then by outlining the conditions under which these mappings hold. Importantly, these objects and mappings account for observed heterogeneity across ad impression opportunities.\nSecond, it demonstrates how these mappings can be leveraged to run experiments to estimate the expected effect of ad exposure while addressing the costs of experimentation. In particular, it introduces a flexible algorithm, based on Thompson Sampling, that uses MCMC methods to accomplish these tasks concurrently by exploiting the alignment between them.\nThe rest of the paper discusses the relationship between our approach and the relevant literature and highlights our contributions relative to existing work. The following section defines the experimenting advertiser’s problem. Section 3 shows how we leverage auction theory to align the advertiser’s objectives and Section 4 describes the data available to this advertiser. Section 5 presents the modified TS algorithm we use to implement our framework. Section 6 displays results documenting the performance of the proposed algorithm and shows its advantages over competing methods. The last section concludes." |
| }, |
| { |
| "section_id": "2", |
| "parent_section_id": null, |
| "section_name": "Problem formulation", |
| "text": "Our goal is to develop a method to measure the effect of exposing users to ads an advertiser buys programmatically on AdXs. To buy an impression opportunity, advertisers need to participate in an auction ran by the AdX. Winning the auction allows advertisers to display their ads to users. We take the perspective of a focal advertiser that experiments to recover the expected effect of exposing users to their ad.\nWe define the advertiser’s goal of estimating the expected effect of displaying their ad to a user as the inference goal. Let be the revenue the advertiser receives when their ad is shown to the user and let be the revenue they receive when their ad is not shown. Thus, the incremental effect of displaying the ad is . All the information the advertiser has about the impression opportunity is captured by a variable .666In our MAB setup, is the context of the auction. It can be obtained from a vector of observable display opportunity variables that can include user, impression, and publisher characteristics. For example, if this vector includes the city where the user is located (New York, Los Angeles, or Chicago), the time of day (morning, afternoon, or evening), and the user’s age (young, middle-aged, or old), then can take 27 values, one of which being, for instance, an indicator for Chicago-evening-young.\nThe advertiser’s inference goal is to estimate conditional average treatment effects, in which exposure to the ad is the treatment. We denote them by . The advertiser needs to estimate this object because they do not have complete knowledge of the distribution of the potential outcomes, and , given . Thus, achieving the inference goal requires the collection of data informative of this distribution.\nA method that accomplishes the inference goal has to address four issues, which we discuss below. 
All four are generated by the distinguishing feature of the AdX environment that the treatment, exposure to the ad, can only be obtained by winning an auction." |
| }, |
| { |
| "section_id": "3", |
| "parent_section_id": null, |
| "section_name": "Aligning the advertiser’s objectives", |
| "text": "We align the inference and economic goals by leveraging the structure of the advertiser’s bid optimization problem, which shows how these goals are linked. Because bidding depends on the auction format, the extent to which the two goals are aligned differs across auction formats. We show that in SPAs the inference and economic goals can be perfectly aligned, while in FPAs leveraging this linkage is still helpful, but the goals can only be imperfectly aligned.\nTo characterize our approach, we consider the limiting outcome of maximizing the true expected profit function with respect to bids when the joint distribution is known. In what follows, we use the expressions in equations (2 ###reference_###) and (3 ###reference_###) ignoring the second term, , since it does not depend on the advertiser’s bid. This expression also has the benefit of directly connecting the potential outcomes to this auction-theoretic setting, with the treatment effect, , taking the role of the advertiser’s valuation.\nWe can write the objects the advertiser aims to learn in an SPA and an FPA as the maximizers with respect to of, respectively,\nThe expectations in equations (5 ###reference_###) and (6 ###reference_###) are taken with respect to , , and . The conditioning on the distribution is omitted since is the true expected profit function, so it uses the true to compute the relevant expectations and probabilities. We denote the maximizers of these respective expressions by . We ensure that is well-defined and unique with the following assumption.\nProperties of For all : \n(i) The joint distribution admits a density, . \n(ii) , , and . \n(iii) If then there exists an interval such that in which the density of given , , is strictly positive. \n(iv) If the conditional reversed hazard rate of given , , is decreasing in in as defined above.\nAssumption 2 ###reference_um2### gives conditions on required for us to establish the results presented below. 
These conditions are mild and relatively common in auction models. Assumption 2(i) is standard and made for tractability. In turn, Assumption 2(ii) ensures that the expressions given in equations (5) and (6) are well-defined.\nAssumption 2(iii) is required for us to establish that the optimal bids are unique. Notice that this condition is equivalent to an overlap assumption: the propensity score, here the probability that the advertiser wins the auction and hence that the user is exposed to the ad, lies strictly between zero and one over the relevant range of bids.\nFinally, Assumption 2(iv) is only required to determine the optimal bid for FPAs and is a sufficient condition for it to be unique. It states that the distribution of the highest competing bid conditional on the context has a decreasing reversed hazard rate. This property holds for several distributions, including all decreasing hazard rate distributions and increasing hazard rate Weibull, gamma, and lognormal distributions. For a comprehensive discussion about reversed hazard rate functions, see, for example, Block et al. (1998).\nWe now investigate the relationship between the optimal bids and the ATEs under Assumptions 1 and 2. We present this relationship first for SPAs and then for FPAs, followed by a discussion about their novelty and implications. To our knowledge, ours was the first paper that obtained these links between optimal bids and ATEs. Xu et al. (2016) stated the result of Proposition 1 without proving it. In turn, Moriwaki et al. (2020) provided a more general version that encompasses both Propositions 1 and 2 without formally proving it.
After the first version of our paper, Waisman et al. (2019), Bompaire et al. (2021) demonstrated analogous results to ours.\nOptimal bid in SPAs. Suppose that Assumptions 1, 2(i), 2(ii), and 2(iii) hold. If the auction is an SPA, then the optimal bid equals the ATE given the context.\nOptimal bid in FPAs. Suppose that Assumptions 1 and 2 hold. If the auction is an FPA, it then follows that the optimal bid is a strictly increasing function of the ATE given the context, so that the ATE is identified by inversion." |
| }, |
| { |
| "section_id": "4", |
| "parent_section_id": null, |
| "section_name": "Accomplishing the advertiser’s objectives", |
| "text": "We leverage Propositions 1 ###reference_p1### and 2 ###reference_p2### to develop a method that concurrently accomplishes the advertiser’s goals. Our proposal is an adaptive approach that learns over a sequence of display opportunities." |
| }, |
| { |
| "section_id": "4.1", |
| "parent_section_id": "4", |
| "section_name": "Data generating process", |
| "text": "We begin by stating the following assumption, which we maintain throughout.\nIndependent and identically distributed (i.i.d.) data and .\nAssumption 3 ###reference_um3### is typically maintained in stochastic bandit problems: that the randomness in payoffs is across occurrences of play. It imposes restrictions on the data generating process (DGP). For instance, if competing bidders solved a dynamic problem because of longer-term dependencies, or budget or impression constraints, could become serially correlated, in which case this condition would not hold.\nA different concern regards the context, . Given recent developments in privacy policies and regulation, we treat each observation as an impression opportunity because these policies can limit and even prevent the use of cookies, which enable advertisers to track the same user over time. This can be problematic since users’ responses can be altered by the number of times they are exposed to the ad.\nIf the same user can be tracked, a simple way of addressing this concern would be to add the number of previous impressions to , but doing so would violate Assumption 3 ###reference_um3### because would then become serially correlated. As we show in Section 5 ###reference_###, our algorithm would ignore this autocorrelation, which would yield a departure from more standard MAB models. A more rigorous approach would incorporate this correlation directly into the algorithm, which is the route Bompaire et al., (2021 ###reference_b12###) followed. We view combining their approach with the algorithm we outline as an interesting avenue for future research.\nIt is important to note that Assumption 3 ###reference_um3### is not required for Propositions 1 ###reference_p1### and 2 ###reference_p2### to hold, which guarantee a well-defined and unique optimal bid across impression opportunities. 
We maintain this assumption for convenience because it enables us to cast our problem as a nonlinear stochastic bandit problem.\nFurthermore, given our algorithm of choice, we perform Bayesian inference based on the posterior distribution obtained from the data, which, as we argue below, incorporates the correlation in these data, including that of the context. Therefore, the unit of observation can be viewed as a user, and the context can contain previous displays, frequencies of displays, as well as other associated variables." |
| }, |
| { |
| "section_id": "4.2", |
| "parent_section_id": "4", |
| "section_name": "Observed data", |
| "text": "Algorithms used to solve MAB problems would base the decision of which bid to place in round , , on a tradeoff between randomly picking a bid to obtain more information about its associated payoff (exploration) and the information gathered until then on the optimality of each bid (exploitation). The information at the beginning of round is a function of all data collected until then, which we denote by . Each observation in these data is an ad auction but can also be a user that can be tracked.\nFor the analysis of SPAs, it is useful to define the variable . We can write for SPAs and for FPAs. In both cases, the s are seeds, independent from all other variables, required for randomization depending on which algorithm is used.\nThese data suffer from two issues. The first, common to both auction formats, is the “fundamental problem of causal inference” (Holland, , 1986 ###reference_b32###): and are never observed at the same time. The second concerns what we observe regarding and differs across the two auction formats. For SPAs, we have a censoring problem that arises from the competitive environment: is only observed when the advertiser wins the auction; otherwise, all they know is that it was larger than . Hence, the observed data have a similar structure to the one from the Type 4 Tobit model as defined by Amemyia, (1984 ###reference_b2###). However, for FPAs this restriction is stronger: we never observe and only have either a lower or upper bound on it depending on whether the advertiser wins the auction, so that the observed data have a Type 5 Tobit model structure." |
| }, |
| { |
| "section_id": "5", |
| "parent_section_id": null, |
| "section_name": "Bidding Thompson Sampling (BITS) algorithm", |
| "text": "We now propose a specific procedure to achieve the advertiser’s goals, which is a version of the TS algorithm. We refer to it as Bidding Thompson Sampling (BITS). We proceed in four steps. First, we given a general description of how the algorithm operates. Second, we outline and discuss the specific parametrization we adopt. Third, we describe how we compute estimates of and . Fourth, we address how one can perform inference on the s. Finally, we briefly discuss the general main challenges to implementing this sort of algorithm in practice. We further discuss several generalizations that can be made and relate them to our current approach in Appendix C ###reference_3###." |
| }, |
| { |
| "section_id": "5.1", |
| "parent_section_id": "5", |
| "section_name": "General procedure", |
| "text": "It is not our goal to solve for or implement the optimal learning policy that minimizes regret over a finite number of rounds of play. To our knowledge, a general optimal solution for contextual MAB problems with arbitrary nonlinear expected rewards as a function of contexts, arms, and parameters, and correlated rewards across contexts and arms, such as the one we consider, is not yet known. Instead, we require an algorithm that can easily accommodate and account for information shared across actions. Hence, we make use of the TS algorithm (Thompson, , 1933 ###reference_b59###), which is a Bayesian heuristic to solve MAB problems.131313See Scott, (2015 ###reference_b56###) for an application to computational advertising and Russo et al., (2018 ###reference_b53###) for a survey.\nThe TS algorithm often starts by parametrizing the distributions of rewards associated with each arm. Since in our problem we treat as continuous and the DGP behind all actions, the distribution , is the same, we choose to parametrize it instead. Denote our vector of parameters of interest by . Expected rewards depend on , so we will often write . The algorithm runs while a criterion, , is below a threshold, . At the end of round , the prior over is updated by the likelihood of all the data, . We denote the number of observations gathered on round by and the total number of observations gathered by the end of round by . If for all , the algorithm proceeds auction by auction. We present it in this way to accommodate batch updates. Given the posterior distribution of given , we compute the posterior expected payoff function, and update the criterion . In round , we place the bid ." |
| }, |
| { |
| "section_id": "5.2", |
| "parent_section_id": "5", |
| "section_name": "Specific parametrization", |
| "text": "Given the structure of the data, our algorithm requires re-implementing a Bayesian estimator of a Type 4 or Type 5 Tobit model on each round. The parametric structure we impose is necessary for the implementation of the algorithm but does not represent formal assumptions we make about the true DGP. Thus, we choose a structure that is flexible enough for us to remain agnostic about the true DGP but that is also sufficiently tractable.\nLet be a -dimensional vector interchangeable with . We assume that:\nwhere . Hence, we use a finite mixture of lognormal distributions for with components and mixture weights and an independent Gumbel distribution with location and scale for . We define , , and . Notice that this parametrization imposes Assumptions 1 ###reference_um1### and 2 ###reference_um2###.\nThese choices of distributions warrant additional comments. We use a mixture of lognormal distributions for solely for convenience. Because these variables are measured in monetary amounts it is attractive to model them in a flexible manner, which this mixture approach allows us to do. In turn, the Gumbel distribution can be motivated by auction and extreme value theories. Consider a setting with random variables. Extreme value theory tells us that the distribution of the maximum of these variables converges to a Fréchet under appropriate mild conditions as . Thus, if we are willing to assume that the advertiser faces many competitors, which is often true in RTB settings (Balseiro et al., , 2015 ###reference_b5###; Balseiro and Gur, , 2019 ###reference_b7###), and that the bids submitted by these advertisers are , which could be justified by an independent private values (IPV) assumption, then the Fréchet could provide a good approximation to the distribution of , so that a Gumbel can approximate that of well. 
We emphasize that we do not explicitly maintain such assumptions, but rather rely on such conditions to provide some theoretical backing for this parametrization." |
| }, |
| { |
| "section_id": "5.3", |
| "parent_section_id": "5", |
| "section_name": "Computing s and optimal bids", |
| "text": "" |
| }, |
| { |
| "section_id": "5.3.1", |
| "parent_section_id": "5.3", |
| "section_name": "5.3.1 Drawing from posterior distributions", |
| "text": "Our algorithm requires us to obtain draws from the posterior distribution of given at the end of each round . We now provide a brief description of the MCMC procedures we use to obtain such draws. These procedures are presented in detail in Appendix B ###reference_2###.\nObtaining draws of is straightforward but for two details. First, the parameters are not point identified without further assumptions because and are never observed at the same time. While there exist methods that attempt to obtain information about these correlation coefficients in such situations,141414For example, Vijverberg, (1993 ###reference_b61###) exploited the semi-definiteness of to obtain bounds on . we choose to obtain draws of separately from draws of , which can be interpreted as treating the potential outcomes as independent from each other.\nThe reason why we follow this approach is twofold. First, the correlation coefficients are not crucial to our analysis as they do not affect the s. In addition, as equations (2 ###reference_###) and (3 ###reference_###) illustrate, the distributions of rewards do not depend on these coefficients. Second, this approach has precedent in the literature (Chib and Hamilton, , 2000 ###reference_b17###). Nevertheless, in our simulation exercises we consider a DGP in which and are not independent. As we show in Section 6 ###reference_###, ignoring this correlation does not impact our method adversely.\nThe second challenge in obtaining draws of are the missing data. Nevertheless, Assumption 1 ###reference_um1### effectively implies missingness at random, so that drawing the missing values and augmenting the data becomes straightforward. 
Therefore, the procedure to obtain draws of the outcome-model parameters from their posterior distributions given the data consists of the typical Gibbs sampling method used to estimate finite mixtures of normal distributions with an additional simple data augmentation step, akin to Albert and Chib (1993).\nThe procedure to obtain draws of the parameters of the competing-bid distribution is arguably more complex. Because we cannot obtain conditional conjugacy under the Fréchet distribution, and thus use Gibbs sampling, we obtain draws from the posterior distribution of these parameters given the data using a random walk Metropolis-Hastings procedure instead. We implement its standard version using a multivariate normal proposal distribution." |
| }, |
| { |
| "section_id": "5.3.2", |
| "parent_section_id": "5.3", |
| "section_name": "5.3.2 Estimating s", |
| "text": "Given draws from the posterior distribution of given , using (7 ###reference_###) the estimator of at the end of round is:" |
| }, |
| { |
| "section_id": "5.3.3", |
| "parent_section_id": "5.3", |
| "section_name": "5.3.3 Estimating optimal bids", |
| "text": "Using Proposition 1 ###reference_p1###, it follows that for SPAs. As a result, implementing our algorithm in the context of SPAs does not require us to estimate and update . This not only greatly simplifies the method but also highlights its flexibility: bids are treated as continuous, and the only parametric assumption made corresponds to the finite mixture of normal distributions, which is a flexible model that treats the distribution of potential outcomes in a somewhat agnostic fashion.\nFor FPAs, using Proposition 2 ###reference_p2### and (7 ###reference_###) we can write the estimated bidding function at the end of round as:\nso that the optimal bid at the end of round is , where is given in equation (8 ###reference_###). Notice that is strictly increasing and so its inverse is well-defined. It is also important to note that, under FPAs, in practice we need to normalize since we never observe ." |
| }, |
| { |
| "section_id": "5.4", |
| "parent_section_id": "5", |
| "section_name": "Bayesian inference", |
| "text": "Given that our algorithm leverages a Bayesian estimator to solve the MAB problem, it is convenient to use Bayesian inference to assess the uncertainty around the estimates of the s. Under this framework, the uncertainty around these estimates is captured through the posterior probabilities associated with them. This approach does not resort to asymptotic properties or approximations, although standard results can be invoked to establish the typical asymptotic properties of the resulting Bayes estimators.\nOne advantage of adopting Bayesian inference is that it accommodates the serial correlation in the data in a straightforward way through the likelihood function. Let the likelihood function of the data be . We can factorize it as\nGiven how our algorithm works, conditional on all data collected until round , the data from round , , are distributed based on the resulting likelihood function given in (7 ###reference_###). This is because the data collected until round directly determines the advertiser’s own bids and are independent from the remaining random variables drawn in round . Consequently, the autocorrelation implied by the adaptive collection of the data is captured in the posterior distribution." |
| }, |
| { |
| "section_id": "5.5", |
| "parent_section_id": "5", |
| "section_name": "Challenges for practical implementation", |
| "text": "There are three big challenges with implementing this sort of algorithm in practice:\nObserving : In most cases, ad platforms only log realized outcomes conditional on exposure (e.g., clicks). However, as we noted above, there has been little consideration about collecting data about or the potential outcome when a bid was not submitted but the user still pursued a purchase or perhaps even no purchase.\nDelayed feedback: We are also interested in settings the tech industry refers to as “deep conversions” (i.e., purchases) as opposed to what it refers to as “shallow conversions” (i.e., page views). However, purchases generally occur less often than page views and thus this feedback takes time to realize, and it is critical for a dynamic algorithm to have an“estimate” of these decisions in order to proceed with the dynamic allocation and optimization. It can be important to define an attribution model for how exposures, clicks, and conversions relate to one another.\nWe model potential outcomes as representing purchase events or events whose outcomes can be expressed in monetary amounts in order to be compatible with the monetary nature of bidding. In the display ads industry, these may be known as the value of a particular action, so other actions besides purchases may be accommodated as long as they can me measured in monetary amounts.\nThe first challenge may be overcome by defining the appropriate metric to represent the outcome and also by having in place the infrastructure to log the outcomes, contexts, submitted bids, and competitor bid information needed for the algorithm. We expect the algorithm to be used by an advertiser who can define purchase or even conversions rates with respect to its brand as a whole or a subset of its products in a period of time for the campaign of interest; this period of time is usually during the execution of the algorithm (e.g., a week or multiple weeks). 
This metric should be well defined for and .\nThe second challenge encompasses at least two sub-challenges: applying an attribution model for exposure to conversion and estimating the conversion rate in the presence of delay. The first issue refers to how to deal with the problem of multiple exposures, multiple clicks, and so on as they relate to conversions. As Chapelle, (2014 ###reference_b15###) explains, one strategy is to use exposures and clicks prior to conversion within a time window; in particular, they mention 30 days as a possibility, where conversions beyond 30 days are ignored, together with a last-touch attribution model that assigns the conversion to the last exposure, last click, or first conversion, while the rest are disregarded. This may also be applied to the no-exposure case for the last click and first conversion within the time window through a timestamp-matching exercise. For the issue of delay, the delay distribution and the conversion rate can be jointly modeled, as Chapelle, (2014 ###reference_b15###) discusses. In addition, the weighting scheme proposed in Wang et al., (2022 ###reference_b64###) for implementation in a MAB algorithm can be similarly adapted for the proposed algorithm. The key intuition in Wang et al., (2022 ###reference_b64###) is that the delay distribution may be used to weight the observations to correct for the fact that each observation has a different likelihood of converting depending on the delay elapsed so far. Moreover, it should be noted that most TS-MAB algorithms implemented in display advertising do not update the posterior distributions in real time, but rather use batches: serving is always real-time, but reward estimation is performed at fixed intervals (e.g., every 20 to 40 minutes), thus granting time for data preparation. 
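A minimal sketch of this delay-weighting idea, assuming for illustration an exponential delay distribution with a known rate (the function name and parametrization are our own, not from the cited papers):

```python
import math

def delay_weighted_cvr(observations, delay_rate):
    """Delay-corrected conversion-rate estimate.

    Each observation is a pair (converted_so_far, elapsed_time). A unit is
    weighted by the probability that its conversion, if any, would already
    have been observed by now, i.e., F(elapsed_time) under the delay
    distribution; an exponential delay distribution with a known rate is
    assumed here purely for illustration.
    """
    num = sum(1.0 for converted, _ in observations if converted)
    den = sum(1.0 - math.exp(-delay_rate * t) for _, t in observations)
    return num / den if den > 0 else 0.0
```

With long elapsed times the weights approach one and the estimator reduces to the naive conversion rate; for freshly acquired units the denominator shrinks, correcting the downward bias from not-yet-realized conversions.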
Hence, reward estimation can proceed on the selected data by combining the attribution model with the joint modeling of delay and conversion.\nFinally, the third challenge requires converting the conversion rates to a monetary metric, which may be achieved by multiplying them by the average order value per conversion (or even a look-up table of these values) for the products of the advertiser or the relevant subset in the advertising campaign. More sophisticated approaches may be pursued, such as using auxiliary data to estimate the value of a purchase conditional on a conversion, and then embedding these estimates in the profit function along with the conversion rate." |
| }, |
| { |
| "section_id": "6", |
| "parent_section_id": null, |
| "section_name": "Empirical evaluation", |
| "text": "In this section, we evaluate the performance of BITS through simulations based on a DGP calibrated using real RTB data. We first describe how this DGP is calibrated, followed by a description of the methods to which we compare BITS and of the criteria we use to make the comparisons. Finally, we present the results of these exercises." |
| }, |
| { |
| "section_id": "6.1", |
| "parent_section_id": "6", |
| "section_name": "The iPinYou data set and DGP calibration", |
| "text": "We use the iPinYou data set for our evaluation because it is public and often employed in empirical exercises similar to ours, such as Wu et al., (2015 ###reference_b67###), Shan et al., (2016 ###reference_b57###), Zhang et al., (2016 ###reference_b72###) and Tunuguntla and Hoban, (2021 ###reference_b60###). We direct the reader to Zhang et al., (2015 ###reference_b71###) for a detailed description of this data set.\nA challenge we faced was that we could not find an RTB data set that recorded outcomes for advertisers when they lost auctions, that is, a data set that recorded . We believe this further highlights a contribution of this paper: emphasizing non-zero payoffs for advertisers even when they lose an auction, as can be seen in equations (2 ###reference_###) and (3 ###reference_###). Despite being natural and critical from a causal inference perspective, accounting for in our context seems to have been widely ignored.\nFor our empirical evaluation, we use these data to calibrate the DGP, which is the same for SPAs and FPAs, as follows. We choose the milk powder advertiser and ad exchange number 3. We use mutually exclusive indicators for ten cities from the resulting sample as contexts, yielding different contexts, and set these contexts to be equiprobable. We use these probabilities to draw observations of each context on each round of this exercise.\nThe parameters of the distribution of correspond to their MLE estimates. For potential outcomes, we specify a mixture of two lognormal distributions with mixing probability 0.55. The correlations between and are 0.3 and 0.5. Thus, even though our method treats potential outcomes as independent, under the true DGP they are correlated. The s and the expectations of are random draws from a uniform distribution between 0 and 4, with the expectations of computed from these objects. The s of the first (second) component are random draws from a uniform distribution between 1.5 and 2 (0 and 0.5). 
We assume that the expectations of and are equal for the two components and solve for the s to match these expectations.\nIn our simulations, we consider epochs. Within each epoch, we run the algorithm for rounds, and each round has observations. When running the MCMC methods, we set ." |
| }, |
| { |
| "section_id": "6.2", |
| "parent_section_id": "6", |
| "section_name": "Approaches under consideration", |
| "text": "In addition to BITS, we implement in our simulations additional approaches to evaluate its performance related to both the inference and economic goals. We now briefly outline the other methods we consider. The first three methods rely on a grid with different bids; importantly, this grid always includes the true optimal bid. We provide a more detailed description of the implementation of these alternative approaches in Appendix D ###reference_4###.\nWe implement an A/B test by randomizing with equal probability the bid placed from the grid. The s are estimated by running a regression of on separately for each using the available data. Under pure bid randomization, the estimated slope coefficients from these regressions are consistent for the s due to Assumption 1 ###reference_um1###.\nThe ETC approach proceeds as an A/B test for the first half of the experiment. It then collects all data from this half and, for each , runs a regression of on to estimate the s. For SPAs, this approach places this estimate as the bid for arriving impressions, thus committing to what was learned in the first half of the experiment and to exploitation of that information in the second half. For FPAs, because the and are no longer equal, as Proposition 2 ###reference_p2### demonstrates, this approach computes the sample mean reward associated with each bid and submits the bid with the highest such mean for the second half of the experiment, akin to a greedy algorithm.\nIn this off-the-shelf version of TS, rewards are assumed to follow independent normal distributions across arms for each . The mean and variance of the distribution of arm are updated in the usual way using only observations obtained from pulling arm . For each , it places the estimated optimal bid.\nThis approach uses a reduced-form model for correlation in rewards across arms by specifying that rewards follow a normal distribution whose mean is a cubic polynomial of the bid. 
The parameters are updated as usual, and the method submits, for each value of the context , the estimated optimal bid.\nThis version of BITS specifies a single lognormal distribution for potential outcomes and for , which facilitates implementation. Optimal bids are computed analogously to the procedure given above.\nWe do not consider greedy algorithms in these simulations because our goal is not only to manage experimentation costs but also to perform inference on the estimated s. The TS methods we consider are Bayesian and output objects we can use to perform inference on the s, whereas greedy methods do not produce such objects. However, there are greedy algorithms that could easily be applied in this setting. They can be interpreted as frequentist versions of the TS algorithms we consider, which we address in more detail in Appendix D ###reference_4###." |
| }, |
| { |
| "section_id": "6.3", |
| "parent_section_id": "6", |
| "section_name": "Criteria of comparison", |
| "text": "BITS addresses two goals: minimizing the costs of experimentation and estimating the s. Hence, to compare its performance to the five aforementioned alternative methods, we use two criteria, which assess each of these two goals. We describe them below.\nTo evaluate the performance of these methods in tackling the economic goal, the metric we use is regret. For method , the regret for context at the end of round is ; we have slightly abused notation by incorporating the number of observations into the number of rounds. The subscript indicates that the bid placed by each method can be different, and the randomness in these bids are reflected by the expectation. Finally, total regret is obtained by integrating with respect to : .\nThe MSE represents the inference goal. We compute it by using the final estimates of the s over all the epochs: for context , we estimate the MSE from method using . The term is the estimate of obtained at the end of epoch by method ." |
| }, |
| { |
| "section_id": "6.4", |
| "parent_section_id": "6", |
| "section_name": "Results", |
| "text": "We present the results from our simulations separately for SPAs and FPAs. For ease of illustration, we only show results for two of the ten contexts and present the full sets of results in Appendix E ###reference_5### as they are largely qualitatively similar.\nOur main interest is not on simply comparing the performance of the different methods, but also assessing the role of imposing structure on the method used to estimate s and learn optimal bids. On one end of the spectrum, we have a method with pure randomization and that completely ignores the economic structure of the problem, the A/B test; on the other end, we have BITS, which tackles the inference and economic goals concurrently while exploiting for the full structure of the data and bid optimization problems. The remaining methods are intermediates between these two: ETC alters the A/B test by accounting for regret in the second half of the experiment; TS_OTS accounts for regret from the beginning but still ignores the structure of the problem; TS_CORR adds an extra layer by allowing rewards to be correlated across arms, albeit in a reduced-form way; and BITS_NAIVE explicitly accounts for this structure but using a simpler parametrization. Comparing the performance of these different approaches can help us understand what the most important features are to successfully tackle the economic and inference goals." |
| }, |
| { |
| "section_id": "6.4.1", |
| "parent_section_id": "6.4", |
| "section_name": "6.4.1 Second-price auctions", |
| "text": "We start by analyzing the performance of the different methods in tackling the economic goal. Figure 1 ###reference_### displays the evolution of cumulative regret across rounds averaged across the epochs for ETC, TS_OTS, TS_CORR, BITS_NAIVE, and BITS. We omit the A/B test from this plot as we know that, by construction, its regret is linear, with slope corresponding to that of ETC during the first half of the experiment.\n###figure_1### ###figure_2### Unsurprisingly, BITS has the best performance, being able to virtually eliminate regret as the experiment progresses, which demonstrates that it is able to correctly learn the optimal bid. On the other extreme, and also unexpectedly, ETC’s regret displays a piecewise linear pattern. After the first half of the experiment, the slope approximates zero because the data gathered until then are sufficient to obtain a very precise estimate of the , which, due to Proposition 1 ###reference_p1###, equals the true optimal bid.\nTS_OTS is an intermediate case between ETC and BITS as it tries to handle regret from the beginning of the experiment but, like ETC, does not fully account for the structure of the data, and particularly the relationship between observed and potential outcomes with treatment. Expectedly, TS_OTS performs significantly better than ETC in minimizing regret for the duration of the experiment. In addition, if we augment this method by allowing for correlation in rewards across bids in a reduced-form way, which TS_CORR does, the performance further improves. This improvement, however, is relatively smaller, and varies across contexts, indicating that its scope is dependent upon the underlying DGP.\nFinally, we consider an approach that accounts for the structure of the data but uses a simpler model that does not match the real DGP, BITS_NAIVE. Interestingly, this method approaches the performance of BITS very closely although it is in a sense misspecified. 
Its convergence shows that it is also able to recover the true optimal bid, which, in this case, coincides with the .\nThis is an important finding for two reasons. First, it suggests that using a simpler model, which is easier to implement and faster, may suffice to accomplish the advertiser’s goals. Second, it highlights that allowing for correlation across arms, as TS_CORR does, is not as important as accounting for the structure of the data for the purposes of minimizing regret.\nWe now proceed to analyze the performance of these methods in tackling the inference goal, that is, estimating the . Figure 2 ###reference_### shows the MSEs associated with each method for the same contexts shown in Figure 1 ###reference_###.\n###figure_3### ###figure_4### The results are intuitive. The A/B test performs well in obtaining a low MSE, as its estimate of the is consistent. ETC also performs well, but worse than the A/B test, which arguably occurs by construction since it uses the same estimator but with fewer observations.\nThe MSE of BITS is virtually equal to that of the A/B test, albeit slightly smaller. This is unsurprising given that BITS imposes the correct parametric specification of the when estimating it, whereas the A/B test uses the OLS estimate.\nIn turn, the MSEs of TS_OTS, TS_CORR, and BITS_NAIVE are somewhat aligned with their respective regrets. Interestingly, the ordering of the MSEs of TS_OTS and TS_CORR depends on the context. Furthermore, even though Figure 1 ###reference_### suggests that these two methods can recover the true optimal bids with sufficient data and thus the s, the precision of these estimates may be low.\nThe naive application of BITS also obtains a comparatively low MSE, although not as low as those of the full application of BITS, the A/B test, and ETC. 
Compounded with the findings of Figure 1 ###reference_###, this suggests that the estimates from BITS_NAIVE may not be as precise as those of these alternative methods." |
| }, |
| { |
| "section_id": "6.4.2", |
| "parent_section_id": "6.4", |
| "section_name": "6.4.2 First-price auctions", |
| "text": "We now perform the same analysis for FPAs. This setting is in a sense more complex because and are no longer equal, as Proposition 2 ###reference_p2### shows, thus requiring a more involved estimator of the latter.\nTo assess the performance of these methods in tackling the advertiser’s economic goal, we consider the evolution of cumulative regret across rounds averaged across the epochs. Figure 3 ###reference_### shows results analogous to those from Figure 1 ###reference_###. Overall, these results are virtually equivalent to those obtained in a setting with SPAs.\n###figure_5### ###figure_6### BITS has the best performance in minimizing regret, but BITS_NAIVE performs similarly well. This is a notable result because this method in a sense misspecifies the underlying DGP in its implementation. As a consequence, it uses the incorrect distribution of when calculating the bid adjustment factor characterized in Proposition 2 ###reference_p2###. Nevertheless, this discrepancy is effectively inconsequential.\nThe alternative adaptive approaches, TS_OTS and TS_CORR, can also correctly identify the optimal bid asymptotically However, allowing for correlation of rewards across arms in a reduced-form fashion may but need not improve their performance depending on the underlying DGP. In turn, ETC displays the same piecewise linear pattern as before. Nevertheless, the decrease in slope is not as pronounced as before.\nWe assess the performance of these methods in estimating the s through MSEs, which Figure 4 ###reference_### presents analogously to from Figure 2 ###reference_###, even though the difference in the scale of MSEs between the two scenarios is noteworthy. However, the displayed MSEs are only those associated with BITS, BITS_NAIVE, the A/B test, and ETC. As Proposition 2 ###reference_p2### shows, under FPAs the s are no longer equal to . 
Obtaining estimates of these treatment effect parameters requires us to estimate a bid adjustment factor that depends on the distribution of . Consequently, reduced-form methods that ignore the structure of the data, such as TS_OTS and TS_CORR, are unable to estimate the s.\n###figure_7### ###figure_8### As in the SPA setting, BITS performs better than the alternatives since it imposes the correct parametric form of the . Its performance is followed by that of the A/B test, which, in turn, outperforms ETC by construction. Finally, the naive implementation of BITS yields the highest MSE. Compounded with the results from Figure 3 ###reference_###, this suggests that despite being consistent, the estimator of the from BITS_NAIVE may be less precise than those of these alternative methods." |
| }, |
| { |
| "section_id": "6.4.3", |
| "parent_section_id": "6.4", |
| "section_name": "6.4.3 Summary", |
| "text": "Overall, the results from these exercises illustrate that BITS succeeds in addressing the advertiser’s economic and inference goals. It is able to recover the true optimal bids under both SPAs and FPAs, which allows it to asymptotically eliminate regret, addressing the advertiser’s economic goal. This result combined with the MSE of the estimator of the s from BITS and Propositions 1 ###reference_p1### and 2 ###reference_p2### shows that this estimator is consistent, thus addressing the advertiser’s inference goal.\nNotably, the results also suggest that simpler implementations of BITS can also address these goals concurrently and successfully even if their models are in a sense misspecified, that is, they impose a parametric structure in their implementation that does not coincide with the true underlying DGP. This is a relevant result as their implementation can be substantially easier and faster than that of BITS.\nIn addition, the naive implementation of BITS outperforms alternative adaptive methods, including an example that accounts for correlation in rewards across arms in a reduced-form way, which, in turn, outperforms simpler approaches that focus mostly on causal inference as opposed to managing regret. This suggests that exploiting the full structure of the data when addressing the advertiser’s goals can be as relevant as accounting for such correlation, if not more.\nFinally, it is important to notice that with additional data and assumptions other approaches can be expected to outperform BITS. This could be the case, for example, when one has access to data on clicks and sales, one could impose an explicit connection between displays and sales via clicks using the method and assumptions from Bompaire et al., (2021 ###reference_b12###). BITS is agnostic about such connections and would not exploit such data if they were available. 
We expect that if the structure a method imposes on these connections is correct, then such a method would naturally perform better than BITS." |
| }, |
| { |
| "section_id": "7", |
| "parent_section_id": null, |
| "section_name": "Concluding remarks", |
| "text": "This paper introduced an online method to perform causal inference on RTB advertising. This method leverages the alignment between the optimal bids under sealed-bid SPAs and FPAs with the expected effect of ad exposure to concurrently estimate such effects while accounting for the costs of experimentation. The specific mappings between optimal bids and the expected effect of ad exposure are derived by leveraging auction theory and by carefully outlining the conditions under which they hold.\nSome broader implications of the experimental design beyond RTB advertising are worth mentioning. First, the ideas presented here can be useful in other situations outside of ad auctions where there is a cost to obtain experimental units and where managing these costs is critical for the viability of the experiment. These costs can be driven by the cost of acquiring experimental units (such as sending mailers in offline experiments or other incentives given for experimental participation), as well as the opportunity costs of showing unproven treatments (such as new creatives or new price ranges) to users.\nAnother implication is that, in business experiments, incorporating the firm’s payoff or profit maximization goal into the allocation and acquisition of experimental units is helpful. Examples include pricing experiments and product feature experiments for which there is a clear payoff goal to the firm from such experimentation, which can be incorporated into a MAB problem. Given the burgeoning utilization of experimentation by firms, we believe that leveraging this perspective in business experiments more broadly has value.\nFinally, another takeaway from the proposed approach is that it demonstrates the value of embedding experimental design in the microfoundations of the problem, which enables leveraging economic theory to make progress on running experiments more effectively. 
This aspect could be utilized in other settings where large-scale experimentation is feasible and where economic theory puts structure on the behavior of agents and associated outcomes. The fusing of experimental design and economic theory in this manner is a direction for our future research." |
| } |
| ], |
| "appendix": [ |
| { |
| "section_id": "Appendix x1", |
| "parent_section_id": null, |
| "section_name": "Appendix", |
| "text": "We now present in more detail the procedure we summarized in Section 5.3.1 ###reference_.SSS1### that obtains draws from the posterior distribution of the parameters given the data. First, we describe the procedure to obtain draws of the parameters of the distributions of potential outcomes. Then, we describe the procedure for those of the distribution of the highest competing bid. These two procedures can be conducted separately because of Assumption 1 ###reference_um1###.\nWe now discuss some practical considerations that arise when implementing the algorithm and ways in which it can be adapted or extended to accommodate variations in the experimentation environment and advertiser goals.\nWe now describe in more detail the alternative methods to estimate s from Section 6.2 ###reference_###. To this end, we consider a grid of bids denoted by with , which always contain the true optimal bid. All the methods are applied separately for each value of the context except for the naive implementation of BITS.\nFigures E.1 ###reference_5.F1### and E.2 ###reference_5.F2### are analogous to Figures 1 ###reference_### and 3 ###reference_###, respectively, and show the evolution of cumulative regret averaged across the epochs for all contexts considered in the simulations. Overall, these results are qualitatively equivalent.\n###figure_9### ###figure_10### In turn, Figures E.3 ###reference_5.F3### and E.4 ###reference_5.F4### are analogous to Figures 2 ###reference_### and 4 ###reference_###, respectively, and show the MSEs of the methods under consideration for all contexts. Once again, the results are qualitatively equivalent.\n###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16### ###figure_17### ###figure_18### ###figure_19### ###figure_20### ###figure_21### ###figure_22### ###figure_23### ###figure_24### ###figure_25### ###figure_26### ###figure_27### ###figure_28### ###figure_29### ###figure_30###" |
| } |
| ], |
| "tables": {}, |
| "image_paths": { |
| "1(a)": { |
| "figure_path": "1908.08600v4_figure_1(a).png", |
| "caption": "(a) Context 4\nFigure 1: Evolution of cumulative regret for SPAs: contexts 4 and 7", |
| "url": "http://arxiv.org/html/1908.08600v4/extracted/5430826/avg_cum_R_SPA_4.png" |
| }, |
| "1(b)": { |
| "figure_path": "1908.08600v4_figure_1(b).png", |
| "caption": "(b) Context 7\nFigure 1: Evolution of cumulative regret for SPAs: contexts 4 and 7", |
| "url": "http://arxiv.org/html/1908.08600v4/extracted/5430826/avg_cum_R_SPA_7.png" |
| }, |
| "2(a)": { |
| "figure_path": "1908.08600v4_figure_2(a).png", |
| "caption": "(a) Context 4\nFigure 2: MSEs for SPAs: contexts 4 and 7", |
| "url": "http://arxiv.org/html/1908.08600v4/x1.png" |
| }, |
| "2(b)": { |
| "figure_path": "1908.08600v4_figure_2(b).png", |
| "caption": "(b) Context 7\nFigure 2: MSEs for SPAs: contexts 4 and 7", |
| "url": "http://arxiv.org/html/1908.08600v4/x2.png" |
| }, |
| "3(a)": { |
| "figure_path": "1908.08600v4_figure_3(a).png", |
| "caption": "(a) Context 4\nFigure 3: Evolution of cumulative regret for FPAs: contexts 4 and 7", |
| "url": "http://arxiv.org/html/1908.08600v4/extracted/5430826/avg_cum_R_FPA_4.png" |
| }, |
| "3(b)": { |
| "figure_path": "1908.08600v4_figure_3(b).png", |
| "caption": "(b) Context 7\nFigure 3: Evolution of cumulative regret for FPAs: contexts 4 and 7", |
| "url": "http://arxiv.org/html/1908.08600v4/extracted/5430826/avg_cum_R_FPA_7.png" |
| }, |
| "4(a)": { |
| "figure_path": "1908.08600v4_figure_4(a).png", |
| "caption": "(a) Context 4\nFigure 4: MSEs for FPAs: contexts 4 and 7", |
| "url": "http://arxiv.org/html/1908.08600v4/x3.png" |
| }, |
| "4(b)": { |
| "figure_path": "1908.08600v4_figure_4(b).png", |
| "caption": "(b) Context 7\nFigure 4: MSEs for FPAs: contexts 4 and 7", |
| "url": "http://arxiv.org/html/1908.08600v4/x4.png" |
| }, |
| "5": { |
| "figure_path": "1908.08600v4_figure_5.png", |
| "caption": "Figure E.1: Evolution of cumulative regret for SPAs", |
| "url": "http://arxiv.org/html/1908.08600v4/extracted/5430826/avg_cum_R_SPA.png" |
| }, |
| "6": { |
| "figure_path": "1908.08600v4_figure_6.png", |
| "caption": "Figure E.2: Evolution of cumulative regret for FPAs", |
| "url": "http://arxiv.org/html/1908.08600v4/extracted/5430826/avg_cum_R_FPA.png" |
| } |
| }, |
| "validation": true, |
| "references": [ |
| { |
| "1": { |
| "title": "Bayesian analysis of binary and polychotomous response data.", |
| "author": "Albert, J. H. and Chib, S. (1993).", |
| "venue": "Journal of the American Statistical Association,\n88(422):669–679.", |
| "url": null |
| } |
| }, |
| { |
| "2": { |
| "title": "Tobit models: A survey.", |
| "author": "Amemyia, T. (1984).", |
| "venue": "Journal of Econometrics, 24(1–2):3–61.", |
| "url": null |
| } |
| }, |
| { |
| "3": { |
| "title": "Identification of standard auction models.", |
| "author": "Athey, S. and Haile, P. A. (2002).", |
| "venue": "Econometrica, 70(6):2107–2140.", |
| "url": null |
| } |
| }, |
| { |
| "4": { |
| "title": "Reserve price optimization at scale.", |
| "author": "Austin, D., Seljan, S., Moreno, J., and Tzeng, S. (2016).", |
| "venue": "In Zaiane, O. R. and Matwin, S., editors, Proc. 2016 IEEE\nInternat. Conf. on Data Science and Advanced Analytics, pages 528–536.\n(IEEE, New York).", |
| "url": null |
| } |
| }, |
| { |
| "5": { |
| "title": "Repeated auctions with budgets in ad exchanges: Approximations and\ndesign.", |
| "author": "Balseiro, S. R., Besbes, O., and Weintraub, G. Y. (2015).", |
| "venue": "Management Science, 61(4):864–884.", |
| "url": null |
| } |
| }, |
| { |
| "6": { |
| "title": "Contextual bandits with cross-learning.", |
| "author": "Balseiro, S. R., Golrezaei, N., Mahdian, M., Mirrokni, V., and Schneider, J.\n(2019).", |
| "venue": "In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R., editors, Advances in Neural\nInformation Processing Systems 32, pages 9679–9688. (Curran Associates,\nInc., New York).", |
| "url": null |
| } |
| }, |
| { |
| "7": { |
| "title": "Learning in repeated auctions with budgets: Regret minimization and\nequilibrium.", |
| "author": "Balseiro, S. R. and Gur, Y. (2019).", |
| "venue": "Management Science, 65(9):3952–3968.", |
| "url": null |
| } |
| }, |
| { |
| "8": { |
| "title": "Bandits with unobserved confounders: A causal approach.", |
| "author": "Bareinboim, E., Forney, A., and Pearl, J. (2015).", |
| "venue": "In Cortes, C., Lawrence, N., Lee, D., Sugiyama, M., and Garnett, R.,\neditors, Advances in Neural Information Processing Systems 28, pages\n1342–1350. (Curran Associates, Inc., New York).", |
| "url": null |
| } |
| }, |
| { |
| "9": { |
| "title": "Online decision-making with high-dimensional covariates.", |
| "author": "Bastani, H. and Bayati, M. (2020).", |
| "venue": "Operations Research, 68(1):276–294.", |
| "url": null |
| } |
| }, |
| { |
| "10": { |
| "title": "Consumer heterogeneity and paid search effectiveness: A large-scale\nfield experiment.", |
| "author": "Blake, T., Nosko, C., and Tadelis, S. (2015).", |
| "venue": "Econometrica, 83(1):155–174.", |
| "url": null |
| } |
| }, |
| { |
| "11": { |
| "title": "The reversed hazard rate function.", |
| "author": "Block, H. W., Savits, T. H., and Singh, H. (1998).", |
| "venue": "Probability in the Engineering and Informational Sciences,\n12(1):69–90.", |
| "url": null |
| } |
| }, |
| { |
| "12": { |
| "title": "Causal models for real time bidding with repeated user interactions.", |
| "author": "Bompaire, M., Gilotte, A., and Heymann, B. (2021).", |
| "venue": "In Zhu, F., Ooi, B. C., and Miao, C., editors, Proc. of the 27th\nACM SIGKDD Internat. Conf. on Knowledge Discovery & Data Mining, pages\n75–85. (ACM, New York).", |
| "url": null |
| } |
| }, |
| { |
| "13": { |
| "title": "Pure exploration in multi-armed bandits problems.", |
| "author": "Bubeck, S., Munos, R., and Stoltz, G. (2009).", |
| "venue": "In Gavaldà, R., Lugosi, G., Zeugmann, T., and Zilles, S.,\neditors, Internat. Conf. on Algorithmic Learning Theory, pages 23–37.\n(Springer, Berlin).", |
| "url": null |
| } |
| }, |
| { |
| "14": { |
| "title": "Real-time bidding by reinforcement learning in display advertising.", |
| "author": "Cai, H., Ren, K., Zhang, W., Malialis, K., Wang, J., Yu, Y., and Guo, D.\n(2017).", |
| "venue": "In de Rijke, M. and Shokouhi, M., editors, Proc. of the Tenth\nACM Internat. Conf. on Web Search and Data Mining, pages 661–670. (ACM, New\nYork).", |
| "url": null |
| } |
| }, |
| { |
| "15": { |
| "title": "Modeling delayed feedback in display advertising.", |
| "author": "Chapelle, O. (2014).", |
| "venue": "In Macskassy, S. and Perlich, C., editors, Proc. of the 21st ACM\nSIGKDD Internat. Conf. on Knowledge Discovery & Data Mining, pages\n1097–1105. (ACM, New York).", |
| "url": null |
| } |
| }, |
| { |
| "16": { |
| "title": "A/B testing of auctions.", |
| "author": "Chawla, S., Hartline, J., and Nekipelov, D. (2016).", |
| "venue": "arXiv preprint arXiv:1606.00908.", |
| "url": null |
| } |
| }, |
| { |
| "17": { |
| "title": "Bayesian analysis of cross-section and clustered data treatment\nmodels.", |
| "author": "Chib, S. and Hamilton, B. H. (2000).", |
| "venue": "Journal of Econometrics, 97(1):25–50.", |
| "url": null |
| } |
| }, |
| { |
| "18": { |
| "title": "Bridging the gap between regret minimization and best arm\nidentification, with application to A/B tests.", |
| "author": "Degenne, R., Nedelec, T., Calauzènes, C., and Perchet, V. (2019).", |
| "venue": "In Chaudhuri, K. and Sugiyama, M., editors, Proc. of the\nTwenty-Second Internat. Conf. on Artificial Intelligence and Statistics,\npages 1988–1996. (PMLR, Naha).", |
| "url": null |
| } |
| }, |
| { |
| "19": { |
| "title": "Can credit increase revenue?", |
| "author": "Dikkala, N. and Tardos, É. (2013).", |
| "venue": "In Chen, Y. and Immorlica, N., editors, 9th Internat. Conf. on\nWeb and Internet Economics, pages 121–133. (Springer, Berlin).", |
| "url": null |
| } |
| }, |
| { |
| "20": { |
| "title": "Estimation considerations in contextual bandits.", |
| "author": "Dimakopoulou, M., Zhou, Z., Athey, S., and Imbens, G. (2018).", |
| "venue": "arXiv preprint arXiv:1711.07077.", |
| "url": null |
| } |
| }, |
| { |
| "21": { |
| "title": "Test & roll: Profit-maximizing A/B tests.", |
| "author": "Feit, E. M. and Berman, R. (2019).", |
| "venue": "Marketing Science, 38(6):1038–1058.", |
| "url": null |
| } |
| }, |
| { |
| "22": { |
| "title": "Learning to bid without knowing your value.", |
| "author": "Feng, Z., Podimata, C., and Syrgkanis, V. (2018).", |
| "venue": "In Tardos, É., editor, Proc. of the 2018 ACM Conf. on\nEconomics and Computation, pages 505–522. (ACM, New York).", |
| "url": null |
| } |
| }, |
| { |
| "23": { |
| "title": "Real-time bidding with side information.", |
| "author": "Flajolet, A. and Jaillet, P. (2017).", |
| "venue": "In Guyon, I., Von Luxburg, U., Bengio, S., Wallach, H., Fergus, R.,\nVishwanathan, S., and Garnett, R., editors, Advances in Neural\nInformation Processing Systems 30, pages 5163–5173. (Curran Associates,\nInc., New York).", |
| "url": null |
| } |
| }, |
| { |
| "24": { |
| "title": "Counterfactual data-fusion for online reinforcement learners.", |
| "author": "Forney, A., Pearl, J., and Bareinboim, E. (2017).", |
| "venue": "In Precup, D. and Teh, Y. W., editors, Proc. of the 34th\nInternat. Conf. on Machine Learning, pages 1156–1164. (PMLR, Sydney).", |
| "url": null |
| } |
| }, |
| { |
| "25": { |
| "title": "On explore-then-commit strategies.", |
| "author": "Garivier, A., Kaufmann, E., and Lattimore, T. (2016).", |
| "venue": "In Lee, D., Sugiyama, M., Luxburg, U., Guyon, I., and Garnett, R.,\neditors, Advances in Neural Information Processing Systems 29, pages\n784–792. (Curran Associates, Inc., New York).", |
| "url": null |
| } |
| }, |
| { |
| "26": { |
| "title": "Online evaluation of audiences for targeted advertising via bandit\nexperiments.", |
| "author": "Geng, T., Lin, X., and Nair, H. S. (2020).", |
| "venue": "In Stone, P., editor, Proc. of the Thirty-Fourth AAAI Conf. on\nArtificial Intelligence, pages 13273–13279. (AAAI Press, Palo Alto).", |
| "url": null |
| } |
| }, |
| { |
| "27": { |
| "title": "A linear response bandit problem.", |
| "author": "Goldenshluger, A. and Zeevi, A. (2013).", |
| "venue": "Stochastic Systems, 3(1):230–261.", |
| "url": null |
| } |
| }, |
| { |
| "28": { |
| "title": "Inefficiencies in digital advertising markets.", |
| "author": "Gordon, B. R., Jerath, K., Katona, Z., Narayanan, S., Shin, J., and Wilbur,\nK. C. (2021).", |
| "venue": "Journal of Marketing, 85(1):7–25.", |
| "url": null |
| } |
| }, |
| { |
| "29": { |
| "title": "Confidence intervals for policy evaluation in adaptive experiments.", |
| "author": "Hadad, V., Hirshberg, D. A., Zhan, R., Wager, S., and Athey, S. (2021).", |
| "venue": "arXiv preprint arXiv:1911.02768.", |
| "url": null |
| } |
| }, |
| { |
| "30": { |
| "title": "Learning to bid optimally and efficiently in adversarial first-price\nauctions.", |
| "author": "Han, Y., Zhou, Z., Flores, A., Ordentlich, E., and Weissman, T. (2020a).", |
| "venue": "arXiv preprint arXiv:2007.04568.", |
| "url": null |
| } |
| }, |
| { |
| "31": { |
| "title": "Optimal no-regret learning in repeated first-price auctions.", |
| "author": "Han, Y., Zhou, Z., and Weissman, T. (2020b).", |
| "venue": "arXiv preprint arXiv:2003.09795.", |
| "url": null |
| } |
| }, |
| { |
| "32": { |
| "title": "Statistics and causal inference.", |
| "author": "Holland, P. W. (1986).", |
| "venue": "Journal of the American Statistical Association,\n81(396):945–960.", |
| "url": null |
| } |
| }, |
| { |
| "33": { |
| "title": "Bayesian inference for causal effects in randomized experiments with\nnoncompliance.", |
| "author": "Imbens, G. W. and Rubin, D. B. (1997).", |
| "venue": "Annals of Statistics, 25(1):305–327.", |
| "url": null |
| } |
| }, |
| { |
| "34": { |
| "title": "Mean field equilibria of dynamic auctions with learning.", |
| "author": "Iyer, K., Johari, R., and Sundararajan, M. (2014).", |
| "venue": "Management Science, 60(12):2949–2970.", |
| "url": null |
| } |
| }, |
| { |
| "35": { |
| "title": "A bandit approach to sequential experimental design with false\ndiscovery control.", |
| "author": "Jamieson, K. G. and Jain, L. (2018).", |
| "venue": "In Bengio, S., Wallach, H., Larochelle, H., Grauman, K.,\nCesa-Bianchi, N., and Garnett, R., editors, Advances in Neural\nInformation Processing Systems 31, pages 3664–3674. (Curran Associates,\nInc., New York).", |
| "url": null |
| } |
| }, |
| { |
| "36": { |
| "title": "Real-time bidding with multi-agent reinforcement learning in display\nadvertising.", |
| "author": "Jin, J., Song, C., Li, H., Gai, K., Wang, J., and Zhang, W. (2018).", |
| "venue": "In Cuzzocrea, A., editor, Proc. of the 27th ACM Internat. Conf.\non Information and Knowledge Management, pages 2193–2201. (ACM, New York).", |
| "url": null |
| } |
| }, |
| { |
| "37": { |
| "title": "Ghost ads: Improving the economics of measuring online ad\neffectiveness.", |
| "author": "Johnson, G. A., Lewis, R. A., and Nubbemeyer, E. I. (2017).", |
| "venue": "Journal of Marketing Research, 54(6):867–884.", |
| "url": null |
| } |
| }, |
| { |
| "38": { |
| "title": "A sequential test for selecting the better variant: Online A/B\ntesting, adaptive allocation, and continuous monitoring.", |
| "author": "Ju, N., Hu, D., Henderson, A., and Hong, L. (2019).", |
| "venue": "In Culpepper, J. S. and Moffat, A., editors, Proc. of the Twelth\nACM Internat. Conf. on Web Search and Data Mining, pages 492–500. (ACM, New\nYork).", |
| "url": null |
| } |
| }, |
| { |
| "39": { |
| "title": "Instrument-armed bandits.", |
| "author": "Kallus, N. (2018).", |
| "venue": "In Janoos, F., Mohri, M., and Sridharan, K., editors, Internat.\nConf. on Algorithmic Learning Theory, pages 529–546. (PMLR, Lanzarote).", |
| "url": null |
| } |
| }, |
| { |
| "40": { |
| "title": "Mechanism design with bandit feedback.", |
| "author": "Kandasamy, K., Gonzalez, J. E., Jordan, M. I., and Stoica, I. (2020).", |
| "venue": "arXiv preprint arXiv:2004.08924.", |
| "url": null |
| } |
| }, |
| { |
| "41": { |
| "title": "Causal bandits: Learning good interventions via causal inference.", |
| "author": "Lattimore, F., Lattimore, T., and Reid, M. D. (2016).", |
| "venue": "In Lee, D., Sugiyama, M., Luxburg, U., Guyon, I., and Garnett, R.,\neditors, Advances in Neural Information Processing Systems 29, pages\n1181–1189. (Curran Associates, Inc., New York).", |
| "url": null |
| } |
| }, |
| { |
| "42": { |
| "title": "Incrementality bidding & attribution.", |
| "author": "Lewis, R. and Wong, J. (2018).", |
| "venue": "SSRN preprint 3129350.", |
| "url": null |
| } |
| }, |
| { |
| "43": { |
| "title": "Monotone treatment response.", |
| "author": "Manski, C. F. (1997).", |
| "venue": "Econometrica, 65(6):1311–1334.", |
| "url": null |
| } |
| }, |
| { |
| "44": { |
| "title": "A theory of auctions and competitive bidding.", |
| "author": "Milgrom, P. R. and Weber, R. J. (1982).", |
| "venue": "Econometrica, 50(5):1089–1122.", |
| "url": null |
| } |
| }, |
| { |
| "45": { |
| "title": "Dynamic online pricing with incomplete information using multi-armed\nbandit experiments.", |
| "author": "Misra, K., Schwartz, E. M., and Abernethy, J. (2019).", |
| "venue": "Marketing Science, 38(2):226–252.", |
| "url": null |
| } |
| }, |
| { |
| "46": { |
| "title": "Unbiased lift-based bidding system.", |
| "author": "Moriwaki, D., Hayakawa, Y., Munemasa, I., Saito, Y., and Matsui, A. (2020).", |
| "venue": "arXiv preprint arXiv:2007.04002.", |
| "url": null |
| } |
| }, |
| { |
| "47": { |
| "title": "Why adaptively collected data have negative bias and how to correct\nfor it.", |
| "author": "Nie, X., Tian, X., Taylor, J., and Zou, J. (2018).", |
| "venue": "In Storkey, A. and Perez-Cruz, F., editors, Proc. of the\nTwenty-First Internat. Conf. on Artificial Intelligence and Statistics,\npages 1261–1269. (PMLR, Lanzarote).", |
| "url": null |
| } |
| }, |
| { |
| "48": { |
| "title": "Reserve prices in internet advertising auctions: A field\nexperiment.", |
| "author": "Ostrovsky, M. and Schwarz, M. (2016).", |
| "venue": "Working paper, Stanford University.", |
| "url": null |
| } |
| }, |
| { |
| "49": { |
| "title": "Causality.", |
| "author": "Pearl, J. (2009).", |
| "venue": "Cambridge University Press.", |
| "url": null |
| } |
| }, |
| { |
| "50": { |
| "title": "Optimizing cluster-based randomized experiments under monotonicity.", |
| "author": "Pouget-Abadie, J., Mirrokni, V., Parkes, D. C., and Airoldi, E. M. (2018).", |
| "venue": "In Guo, Y. and Farooq, F., editors, Proc. of the 24th ACM SIGKDD\nInternat. Conf. on Knowledge Discovery & Data Mining, pages 2090–2099.\n(ACM, New York).", |
| "url": null |
| } |
| }, |
| { |
| "51": { |
| "title": "Optimizing reserve prices for publishers in online ad auctions.", |
| "author": "Rhuggenaath, J., Akcay, A., Zhang, Y., and Kaymak, U. (2019).", |
| "venue": "In Ishibuchi, H. and Zhao, D., editors, 2019 IEEE Conf. on\nComputational Intelligence for Financial Engineering & Economics, pages\n1–8. (IEEE, New York).", |
| "url": null |
| } |
| }, |
| { |
| "52": { |
| "title": "Simple Bayesian algorithms for best arm identification.", |
| "author": "Russo, D. (2020).", |
| "venue": "Operations Research, 68(6):1625–1647.", |
| "url": null |
| } |
| }, |
| { |
| "53": { |
| "title": "A tutorial on Thompson sampling.", |
| "author": "Russo, D., Van Roy, B., Kazerouni, A., Osband, I., and Wen, Z. (2018).", |
| "venue": "Foundations and Trends® in Machine Learning,\n11(1):1–96.", |
| "url": null |
| } |
| }, |
| { |
| "54": { |
| "title": "An experimental investigation of the effects of retargeted\nadvertising: The role of frequency and timing.", |
| "author": "Sahni, N. S., Narayanan, S., and Kalyanam, K. (2019).", |
| "venue": "Journal of Marketing Research, 56(3):401–418.", |
| "url": null |
| } |
| }, |
| { |
| "55": { |
| "title": "Customer acquisition via display advertising using multi-armed bandit\nexperiments.", |
| "author": "Schwartz, E. M., Bradlow, E. T., and Fader, P. S. (2017).", |
| "venue": "Marketing Science, 36(4):500–522.", |
| "url": null |
| } |
| }, |
| { |
| "56": { |
| "title": "Multi-armed bandit experiments in the online service economy.", |
| "author": "Scott, S. L. (2015).", |
| "venue": "Applied Stochastic Models in Business and Industry,\n31(1):37–45.", |
| "url": null |
| } |
| }, |
| { |
| "57": { |
| "title": "Predicting ad click-through rates via feature-based fully coupled\ninteraction tensor factorization.", |
| "author": "Shan, L., Lin, L., Sun, C., and Wang, X. (2016).", |
| "venue": "Electronic Commerce Research and Applications, 16:268–282.", |
| "url": null |
| } |
| }, |
| { |
| "58": { |
| "title": "Competition and crowd-out for brand keywords in sponsored search.", |
| "author": "Simonov, A., Nosko, C., and Rao, J. M. (2018).", |
| "venue": "Marketing Science, 37(2):200–215.", |
| "url": null |
| } |
| }, |
| { |
| "59": { |
| "title": "On the likelihood that one unknown probability exceeds another in\nview of the evidence of two samples.", |
| "author": "Thompson, W. R. (1933).", |
| "venue": "Biometrika, 25(3/4):285–294.", |
| "url": null |
| } |
| }, |
| { |
| "60": { |
| "title": "A near-optimal bidding strategy for real-time display advertising\nauctions.", |
| "author": "Tunuguntla, S. and Hoban, P. R. (2021).", |
| "venue": "Journal of Marketing Research, 58(1):1–21.", |
| "url": null |
| } |
| }, |
| { |
| "61": { |
| "title": "Measuring the unidentified parameter of the extended Roy model of\nselectivity.", |
| "author": "Vijverberg, W. P. M. (1993).", |
| "venue": "Journal of Econometrics, 57(1–3):69–89.", |
| "url": null |
| } |
| }, |
| { |
| "62": { |
| "title": "Multi-armed bandit models for the optimal design of clinical trials:\nBenefits and challenges.", |
| "author": "Villar, S. S., Bowden, J., and Wason, J. M. S. (2015).", |
| "venue": "Statistical Science, 30(2):199–215.", |
| "url": null |
| } |
| }, |
| { |
| "63": { |
| "title": "Online inference for advertising auctions.", |
| "author": "Waisman, C., Nair, H. S., Carrion, C., and Xu, N. (2019).", |
| "venue": "arXiv preprint arXiv:1908.08600.", |
| "url": null |
| } |
| }, |
| { |
| "64": { |
| "title": "Adaptive experimentation with delayed binary feedback.", |
| "author": "Wang, Z., Carrion, C., Lin, X., Ji, F., Bao, Y., and Yan, W. (2022).", |
| "venue": "In WWW '22: Proceedings of the ACM Web Conference 2022, pages\n2247–2255. (ACM, New York).", |
| "url": null |
| } |
| }, |
| { |
| "65": { |
| "title": "Online learning in repeated auctions.", |
| "author": "Weed, J., Perchet, V., and Rigollet, P. (2016).", |
| "venue": "In Feldman, V., Rakhlin, A., and Shamir, O., editors, 29th\nAnnual Conf. on Learning Theory, pages 1562–1583. (PMLR, New York).", |
| "url": null |
| } |
| }, |
| { |
| "66": { |
| "title": "A multi-agent reinforcement learning method for impression allocation\nin online display advertising.", |
| "author": "Wu, D., Chen, C., Yang, X., Chen, X., Tan, Q., Xu, J., and Gai, K. (2018).", |
| "venue": "arXiv preprint arXiv:1809.03152.", |
| "url": null |
| } |
| }, |
| { |
| "67": { |
| "title": "Predicting winning price in real time bidding with censored data.", |
| "author": "Wu, W. C.-H., Yeh, M.-Y., and Chen, M.-S. (2015).", |
| "venue": "In Cao, L. and Zhang, C., editors, Proc. of the 21st ACM SIGKDD\nInternat. Conf. on Knowledge Discovery & Data Mining, pages 1305–1314.\n(ACM, New York).", |
| "url": null |
| } |
| }, |
| { |
| "68": { |
| "title": "Lift-based bidding in ad selection.", |
| "author": "Xu, J., Shao, X., Ma, J., Lee, K.-c., Qi, H., and Lu, Q. (2016).", |
| "venue": "In Stone, P., editor, Proc. of the Thirtieth AAAI Conf. on\nArtificial Intelligence, pages 651–657. (AAAI Press, Palo Alto).", |
| "url": null |
| } |
| }, |
| { |
| "69": { |
| "title": "Estimation bias in multi-armed bandit algorithms for search\nadvertising.", |
| "author": "Xu, M., Qin, T., and Liu, T.-Y. (2013).", |
| "venue": "In Burges, C. J. C., Bottou, L., Welling, M., Ghahramani, Z., and\nWeinberger, K. Q., editors, Advances in Neural Information Processing\nSystems 26, pages 2400–2408. (Curran Associates, Inc., New York).", |
| "url": null |
| } |
| }, |
| { |
| "70": { |
| "title": "A framework for Multi-A(rmed)/B(andit) testing with online\nFDR control.", |
| "author": "Yang, F., Ramdas, A., Jamieson, K. G., and Wainwright, M. J. (2017).", |
| "venue": "In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R.,\nVishwanathan, S., and Garnett, R., editors, Advances in Neural\nInformation Processing Systems 30, pages 5957–5966. (Curran Associates,\nInc., New York).", |
| "url": null |
| } |
| }, |
| { |
| "71": { |
| "title": "Real-time bidding benchmarking with iPinYou dataset.", |
| "author": "Zhang, W., Yuan, S., Wang, J., and Shen, X. (2015).", |
| "venue": "arXiv preprint arXiv:1407.7073.", |
| "url": null |
| } |
| }, |
| { |
| "72": { |
| "title": "Bid-aware gradient descent for unbiased learning with censored data\nin display advertising.", |
| "author": "Zhang, W., Zhou, T., Wang, J., and Xu, J. (2016).", |
| "venue": "In Krishnapuram, B. and Shah, M., editors, Proc. of the 22nd ACM\nSIGKDD Internat. Conf. on Knowledge Discovery & Data Mining, pages\n665–674. (ACM, New York).", |
| "url": null |
| } |
| } |
| ], |
| "url": "http://arxiv.org/html/1908.08600v4", |
| "new_table": {} |
| } |