diff --git "a/20240318/2310.03173v2.json" "b/20240318/2310.03173v2.json" new file mode 100644--- /dev/null +++ "b/20240318/2310.03173v2.json" @@ -0,0 +1,1060 @@ +{ + "title": "\u212c-Coder: Value-Based Deep Reinforcement Learning for Program Synthesis", + "abstract": "Program synthesis aims to create accurate, executable programs from problem specifications, specifically from natural language descriptions in our context.\nRecent studies have leveraged the power of reinforcement learning (RL) in conjunction with large language models (LLMs), significantly enhancing code generation capabilities. The application of RL focuses on directly optimizing for functional correctness, offering an advantage over conventional supervised methods.\nDespite policy-based RL methods dominating the literature on RL for program synthesis, the nature of program synthesis tasks hints at a natural alignment with value-based methods.\nThis stems from the rich collection of off-policy programs, including those developed by human programmers and also historical samples, coupled with the straightforward verification of generated programs through automated unit testing, meaning rewards are easy to obtain.\nDiverging from the dominant use of policy-based algorithms, our work explores the feasibility of value-based approaches, leading to the development of our -Coder (pronounced Bellman coder).\nYet, training value-based methods presents challenges due to the enormous search space inherent to program synthesis.\nTo this end, we introduce an initialization protocol for RL agents utilizing pre-trained LMs and a conservative Bellman operator to reduce training complexities.\nMoreover, we demonstrate how to leverage the learned value functions as a dual strategy to post-process generated programs.\nOur empirical evaluations demonstrated -Coder\u2019s capability in achieving state-of-the-art performance when compared to policy-based methods.\nRemarkably, this achievement is reached with minimal 
reward engineering effort, highlighting the effectiveness of value-based RL, independent of reward designs.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Program synthesis (or code generation) aims to create functionally accurate executable programs from problem specifications, such as input-output (IO) examples (Summers, 1977 ###reference_b83###; Gulwani et al., 2012 ###reference_b34###), constraint-based (Osera & Zdancewic, 2015 ###reference_b66###; Frankle et al., 2016 ###reference_b28###) or natural language descriptions (Hendrycks et al., 2021 ###reference_b38###; Austin et al., 2021 ###reference_b7###), among others. The increasing attention towards this field can be attributed to its potential in transforming the software development paradigm. Notably, AI-powered tools have shown evidence of boosting efficiency within the software industry.\nLarge language models (LLMs) (Brown et al., 2020 ###reference_b16###; OpenAI, 2023 ###reference_b65###; Anil et al., 2023 ###reference_b5###; Chowdhery et al., 2022 ###reference_b21###; Rae et al., 2021 ###reference_b71###; Hoffmann et al., 2022 ###reference_b40###; Touvron et al., 2023 ###reference_b85###) have garnered substantial interest and shown remarkable achievements. The scheme of pre-training on vast amounts of data has yielded notable successes in natural language generation. 
This trend extends its influence to program synthesis, where numerous specialized code LLMs (Li et al., 2023 ###reference_b52###; 2022 ###reference_b53###; Nijkamp et al., 2022 ###reference_b64###; Zheng et al., 2023 ###reference_b100###; Fried et al., 2022 ###reference_b29###; Chen et al., 2021a ###reference_b18###; Wang et al., 2021 ###reference_b89###; 2023 ###reference_b90###; Xu et al., 2023 ###reference_b96###; Rozi\u00e8re et al., 2023 ###reference_b76###) have been introduced to address challenges in program synthesis.\nUnlike many free-form natural language generation tasks, where the quality of model\u2019s output is hard to assess, the correctness of synthesized programs can be verified through automated execution with predefined unit tests. This allows for directly optimizing execution outcomes through reinforcement learning (RL), by formulating test outcomes as reward signals.\nOur discussion focuses on recent RL-based works (Le et al., 2022 ###reference_b50###; Shojaee et al., 2023 ###reference_b78###; Liu et al., 2023 ###reference_b55###) that have achieved remarkable advancements in Python text-to-code generation, evaluated on the challenging benchmarks sourced from Codeforces programming contests\n(Hendrycks et al., 2021 ###reference_b38###; Li et al., 2022 ###reference_b53###)\nNotably, these works predominantly favor on-policy policy-based algorithms.\nWhile (on-policy) policy-based methods are favored in existing program synthesis works, they are known to be sample inefficient (Nachum et al., 2017 ###reference_b61###; Gu et al., 2016 ###reference_b32###) due to their inability to use off-policy samples.\nIn contrast, value-based methods, using temporal difference learning, are known to be more sample-efficient (Gu et al., 2016 ###reference_b32###; Nachum et al., 2017 ###reference_b61###; Liu et al., 2020 ###reference_b56###), as they solve a fixed-point iteration which does not explicitly require a specific data distribution, hence offering 
better compatibility with off-policy data.\nWe defer the technical explanations on on/off-policy data and reasons for the different efficiency to Section 3.2 ###reference_###, where we have notations and definitions ready.\nIn program synthesis, the primary sources of off-policy data include human programs and previously synthesized programs. Both are off-policy as they do not follow the sequence distribution induced by the current model.\nCurrent program synthesis works often directly use off-policy samples with on-policy methods. Unsurprisingly, Shojaee et al. (2023 ###reference_b78###) notices that an increase in off-policy synthetic programs may degrade performance. This occurs as off-policy data lead to biased gradient estimates. Ideally, an objective should be to enhance or at least sustain performance as data volume grows.\nTo summarize, the reasons that suggest a natural fit for value-based methods in program synthesis are twofold: the availability of (inexpensive) rewards,\nsimilar to classical RL tasks like GO and Atari; and the principle compatibility with off-policy data for effectively leveraging human and historical data.\nHowever, value-based RL faces challenges such as difficulty in converging in large state-action spaces. To this end, we introduce -Coder (Bellman coder), with our contributions being threefold:\nWe stabilize value-based RL for program synthesis by proposing an initialization protocol for -functions and a conservative Bellman operator to mitigate the training complexities.\nWe demonstrate how to leverage value functions as a dual strategy to improve generation.\n-Coder achieves strong empirical performance with minimal reward engineering, providing further insights of RL algorithm design independent of reward function designs.\nPaper structure.\nWe introduce related works and notations in Section 2 ###reference_### and 3 ###reference_###. Section 4 ###reference_### details our method and the rationale behind our design choices. 
Specifically, Sections 4.1 ###reference_###, 4.2 ###reference_###, and 4.3 ###reference_### address the challenges of value function training by: leveraging task structure, providing effective -function initialization, and a conservative operator for stable yet less ambitious updates, respectively. Section 4.5 ###reference_### shows an additional benefit of value functions, and Section 5 ###reference_### shows our empirical results." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related works", + "text": "Execution-guided program synthesis. The feasibility of verifying programs through test case outcomes has led to the line of execution-guided works (Chen et al., 2018 ###reference_b19###; Zohar & Wolf, 2018 ###reference_b105###; Chen et al., 2021b ###reference_b20###).\nWhile these efforts leverage execution feedback, they do not directly optimize towards higher execution success rate due to the inherent non-differentiability of execution outcomes.\nRL for general sequence modeling. Supervised LM training, using next token predictions (NTP) or masked language modeling (Kenton & Toutanova, 2019 ###reference_b47###), has recognized limitations. One prominent issue is the exposure bias: given that the training is done in a \u201cteacher-forcing\u201d manner (Bengio et al., 2015 ###reference_b12###; Ranzato et al., 2015 ###reference_b74###), errors tend to accumulate during testing due to auto-regressive generation.\nIn contrast, prior works (Ranzato et al., 2015 ###reference_b74###; Rennie et al., 2017 ###reference_b75###) have demonstrated the efficacy of RL in addressing exposure bias and optimizing non-differentiable metrics, e.g. BLEU (Papineni et al., 2002 ###reference_b68###) and ROUGE (Lin, 2004 ###reference_b54###), by leveraging automatic scoring as reward function.\nRL for program synthesis. 
Supervised losses also fall short when assessing the functional accuracy of synthesized programs (Hendrycks et al., 2021 ###reference_b38###; Chen et al., 2021a ###reference_b18###). As such, relying solely on supervised learning for program synthesis is not ideal.\nAs RL provides a pathway to directly optimize non-differentiable objectives, plentiful work (Zhong et al., 2017 ###reference_b101###; Simmons-Edler et al., 2018 ###reference_b81###; Ellis et al., 2019 ###reference_b27###; Wang et al., 2022 ###reference_b88###) have studied enhancing code generation through RL.\nFor the works most related to ours:\nCodeRL (Le et al., 2022 ###reference_b50###) adapted REINFORCE (Williams, 1992 ###reference_b93###), a classic policy gradient (PG) algorithm, along with the baseline trick for variance reduction and a supervise-trained reward model to alleviate the issue of sparse execution signals. In addition, they proposed a critic sampling strategy to refine and repair program based on the example unit tests feedback.\nPPOCoder (Shojaee et al., 2023 ###reference_b78###) applied proximal policy gradient (Schulman et al., 2017 ###reference_b77###, PPO) to fine-tune pre-trained LMs. In addition, they leverage the syntactic and semantic structure of code, such as syntax trees (Rabinovich et al., 2017 ###reference_b69###) and data-flow graphs (Yasunaga & Liang, 2020 ###reference_b97###), to improve reward function designs.\nRLTF (Liu et al., 2023 ###reference_b55###) proposed an online training framework for program synthesis using policy gradient with heursitically-designed fine-grained rewards.\nAdditional discussions.\nAppendix D ###reference_### lists several RL applications, showing the analogies between program synthesis and tasks that benefit from value-based methods.\nIn C ###reference_###, we extend the discussion on works that extend policy-based methods to an off-policy setting. 
Such attempts often involve training a value function, further highlighting our motivation for starting with value-based methods." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Preliminaries", + "text": "One could formulate the program synthesis task as a sequence-to-sequence generation task, where a model takes a problem description D as input and outputs a program W which aims to achieve the functionality specified by D. A generated program is composed of a sequence of tokens W = (w_1, ..., w_T). For brevity, we use a constant T to denote the sequence length, although it could be a variable in practice, and W to denote a program in general (both generated and ground truth). Let lm be an instance of an LM, l be the logits layer (language modelling head) output, and p be the probability distribution over the vocabulary (computed by passing l through softmax), conditioned on a partial sequence and the description D. Suppose W* is a ground truth program from the train set; conventionally, LMs could be trained by minimizing the cross-entropy loss." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "RL Notations", + "text": "To make notations easier to interpret, we bridge program synthesis notations to standard RL ones. RL problems are typically formulated as Markov Decision Processes (MDPs), and an MDP is often composed of a 5-tuple (S, A, P, R, gamma), which are the state space, action space, transition function, reward function and discount factor, respectively. The discount factor discounts future values to emphasize the near future, and we use a gamma slightly below 1 (which slightly prefers more concise solutions). A (stochastic) transition function P(s' | s, a) is a distribution over S conditioned on a state-action pair (s, a). In program synthesis, P is trivial as s' = s ⊕ a, where ⊕ denotes concatenation. State and action. In the code generation context, an action a_t is a token w_t. Hence the action space A is the vocabulary. As the information used to generate token w_t is (D, w_{1:t-1}), the state s_t is hence defined as (D, w_{1:t-1}). For a given D, the state space consists of all such token sequences prefixed by D.
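To make the MDP formulation above concrete, here is a minimal Python sketch (all names are illustrative, not the paper's implementation): a state pairs the description with the generated prefix, the transition deterministically appends a token, and reward is only computed at termination by running unit tests. The reward levels mirror the CodeRL-style design (pass / failed tests / runtime error) as an assumption.

```python
# Hypothetical sketch of the program-synthesis MDP described above.

def make_initial_state(description):
    # A state is the problem description plus the tokens generated so far.
    return (description, ())

def step(state, action):
    # The transition is deterministic: append the chosen token to the prefix.
    description, prefix = state
    return (description, prefix + (action,))

def terminal_reward(program, unit_tests):
    # Reward is available only once the program is complete: run the tests.
    # `program` is a callable standing in for executing the generated code;
    # unit_tests is a list of (input, expected_output) pairs. The specific
    # reward levels are assumptions mirroring CodeRL's design.
    try:
        return 1.0 if all(program(x) == y for x, y in unit_tests) else -0.3
    except Exception:
        return -0.6  # runtime error
```

Intermediate steps thus carry zero reward, matching the sparse terminal reward described in Section 3.1.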
For brevity, we will mainly use the (s, a) notations rather than the (D, W) ones, and sometimes omit the time index t if it leads to no confusion. We will also use s' to denote the successor state s_{t+1} whenever only the relative temporal position matters. Policy. A policy pi(a | s) assigns an action distribution to any state s, meaning predicting a token based on the current sequence and the problem specification D. Prior works often define the policy as the LM output distribution and directly optimize the LM parameters with PG methods. We however define pi to be a function of the Q-function and other components; see details in Section 4. Reward function. A reward function r(s_t, a_t) determines the reward of taking action a_t at state s_t. We follow the reward design of Le et al. (2022) in equation 2. We may also use the shorthand notation r_t. Note that the reward is determined only when the program is completed at t = T. Thus r_t = 0 if t < T, otherwise it is defined as in equation 2. Value functions. RL maximizes the discounted return sum_t gamma^t r_t. The state-action value function Q^pi and the state value function V^pi are defined recursively as Q^pi(s_t, a_t) = r(s_t, a_t) + gamma E[V^pi(s_{t+1})], where V^pi(s) = E_{a ~ pi}[Q^pi(s, a)]. In addition, the advantage function is A^pi(s, a) = Q^pi(s, a) - V^pi(s)." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Value-based RL and Dueling DQN", + "text": "Value-based algorithms, especially the Q-learning family (Watkins & Dayan, 1992; Mnih et al., 2013; Van Hasselt et al., 2016; Bellemare et al., 2017), have achieved remarkable successes. A canonical framework of the Q-learning family iterates between policy evaluation (PE) and policy improvement (PI): Q^{k+1} = argmin_Q E_{(s,a,s') ~ D}[(r(s, a) + gamma Q^k(s', pi^k(s')) - Q(s, a))^2] and pi^{k+1}(s) = argmax_a Q^{k+1}(s, a), where D is an arbitrary dataset, the PE step estimates the previous policy pi^k using the Bellman equation (Bellman, 1966), and the PI step finds an improved pi^{k+1} by maximizing the Q estimates. In particular, we build our framework on top of Dueling DQN (Wang et al., 2016, DDQN).
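Before turning to DDQN, the PE/PI iteration above can be illustrated with a tiny tabular sketch (names are illustrative; exact tabular fits stand in for gradient steps). Note that the evaluation step consumes transitions from an arbitrary dataset, which is the off-policy compatibility discussed in the introduction.

```python
from collections import defaultdict

GAMMA = 0.9  # discount factor; illustrative value

def policy_evaluation(Q, policy, dataset, sweeps=50):
    # PE step: regress Q towards r + gamma * Q(s', policy(s')) using
    # transitions from an arbitrary (possibly off-policy) dataset.
    for _ in range(sweeps):
        for s, a, r, s_next, done in dataset:
            target = r if done else r + GAMMA * Q[(s_next, policy[s_next])]
            Q[(s, a)] = target  # exact tabular "fit" in place of SGD
    return Q

def policy_improvement(Q, states, actions):
    # PI step: greedify with respect to the current Q estimates.
    return {s: max(actions, key=lambda a: Q[(s, a)]) for s in states}

# Tiny two-state chain: s0 --go--> s1 --finish--> terminal (reward 1).
dataset = [("s0", "go", 0.0, "s1", False), ("s1", "finish", 1.0, None, True)]
Q = policy_evaluation(defaultdict(float), {"s1": "finish"}, dataset)
policy = policy_improvement(Q, ["s0", "s1"], ["go", "finish"])
```

The fixed-point nature of PE is visible here: repeated sweeps converge regardless of which behavior policy produced the transitions.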
In a nutshell, DDQN approximates and with separate heads, and run improvement and evaluation steps with .\nThis bifurcation enables a robust estimation of \nwithout conflating with the actions, which subsequently ensures a stable learning of given that it focuses solely on the relative values.\nAs a consequence, DDQN often exhibits enhanced stability in training dynamics and improved generalization.\nIn addition to the prior mentioned advantages,\nDDQN enables us to leverage\na task structure that ground truth programs should attain highest advantages, therefore reducing the searching space, which we will elaborate on in Section 4.1 ###reference_###.\nRemarks on sample efficiency. We illustrate the inefficiency of policy-based methods using vanilla PG as an example. PG maximizes , with gradient computed using the policy gradient theorem. This method requires training data drawn from the distribution induced by current policy , hence called on-policy. Therefore, one should in principle generate new data and discard historical data at every update, leading to undesired sample inefficiency.\nIn contrast, policy evaluation as in equation 5 ###reference_### works with arbitrary dataset ." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Algorithmic Designs - Accelerating Value-Based Training", + "text": "###figure_1### While value-based RL holds great promise, its training can be challenging due to the large action space and the high-dimensional state space . This leads to a notably large -table of size . And the cardinality of policy space is , which grows doubly exponentially.\nBoth challenges from large action spaces and high-dimensional state spaces are pivotal research topics in RL. The action space challenges are discussed by e.g. Dulac-Arnold et al. (2015 ###reference_b26###); Tavakoli et al. (2018 ###reference_b84###); Kalashnikov et al. (2018 ###reference_b45###), while He et al. (2016 ###reference_b37###); Nair et al. 
(2018 ###reference_b62###), among others, considered the state spaces complexities. In particular, Silver (2015 ###reference_b79###); Duan et al. (2016 ###reference_b25###) commented on that the potentially better training stability of policy-based methods in these scenarios.\nTo address the challenges inherent in training value-based\nRL\nfor LMs, at a high level, we developed -Coder considering three key aspects:\nincorporation of task structure,\ninitialization of -function, and\nbackup using a conservative Bellman operator.\nFigure 1 ###reference_### previews the effectiveness of our algorithmic designs, which shows the training curve of different value-based RL algorithms on the APPS dataset. Due to aforementioned challenges, the performance of the vanilla DDQN continuously decreases even evaluated on the training set. In contrast, both the -function initialization and the conservative Bellman operator show benefits in stabilizing and accelerating the training process.\nFor notational convenience in subsequent sections, we begin with an overview of our notations and parameterizations, summarized in Figure 2 ###reference_###. Figure 2 ###reference_###(a) denotes a pre-trained encoder-decoder LM parameterized by (where subscript ckpt denotes the fact it\u2019s a checkpoint/constant).\nFigure 2 ###reference_###(b) and (c) show the forward graphs of our two different training stages: (b) corresponds to a pre-training stage for , to provide a good initialization for (c) the subsequent fine-tuning of . Motivations and details are deferred to Section 4.2 ###reference_### and 4.3 ###reference_###, respectively.\nAs we proceed to the rationale behind our designs, it is encouraged to maintain familiarity with , , and their corresponding products, especially the forward paths to and , to prevent confusion in the subsequent sections." 
+ }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Leveraging Task Structures", + "text": "###figure_2### ###figure_3### ###figure_4### ###figure_5### As noted earlier, a key attribute of program synthesis task\nis the provision of human solutions, which are guaranteed to be correct. As a result, these solutions should attain the highest -values, even if the correct solutions might not be unique. As such, for a ground truth program ,\n holds for all , hence .\nTo enforce this structure, one could ensure and , where we abuse the notation and by letting . It ensures that has advantages that are roughly the highest.\nTo this end, suppose is a general neural network, we decompose as follows,\nIt enforces our first condition that . For the second condition , we optimize an advantage function by minimizing an auxiliary advantage loss function, namely ,\nWe also cap the -function with , the maximum total rewards. See Appendix G ###reference_### for details." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "-function Initialization", + "text": "Despite the task structures introduced, training the -function from scratch remains extremely challenging. While this is not a problem for policy-based learning (given that directly fine-tune pre-trained LMs without requiring a -function at all), it presents significant challenges in value-based approaches because one often does not have a pre-trained -function. To this end, we show that one could initialize a -function from the logits output of a pre-trained LM.\nInitialization of via pre-trained models. 
Yu & Zhang (2023 ###reference_b98###) considered the fine-tuning of RL agents after offline RL pre-training.\nTheir main idea is to reconstruct a -function from the pre-trained policy, for fine-tuning.\nDrawing inspiration from their approach, one could similarly reconstruct/initialize a -function using a pre-trained LM, akin to using a pre-trained policy.\nThis initialization was motivated by the energy-based policy line of works (Haarnoja et al., 2017 ###reference_b35###; 2018 ###reference_b36###), where a policy is the product of passing a -function through a softmax transfer function. Analogously, in LMs, - the distribution over - is produced by passing logits through softmax.\nwhere is a temperature hyper-parameter.\nOne could naturally set for initialization. Hence, with aforementioned dueling structure in equation 7 ###reference_###\nand our pre-defined parameterization, one could set the advantage function as , leading to .\nSee also our forward pass graph defined in Figure 2 ###reference_###b.\nIn a nutshell, this -function produces a policy identical to the output distribution of ,\nwhere and .\nRecalling equation 5 ###reference_### - 6 ###reference_###, the -learning family can be viewed as iterations between policy evaluation and improvement. We now elaborate on how this function initialization affects both steps.\nPolicy improvement. One could, informally, consider the operation of taking softmax with respect to as a soft policy improvement (Haarnoja et al., 2018 ###reference_b36###) step with a temperature .\nTherefore, equation 11 ###reference_### can be interpreted as: running soft policy improvement alone with this initialized preserved the performance of pre-trained , offering a good starting point of online fine-tuning.\nPolicy evaluation. Yet, this function only captures relative values, since we initialized only the advantages - the relative information - as shown in equation 11 ###reference_###. can thereby be an arbitrary function. 
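The translation-invariance point can be checked numerically: with the advantage initialized from the LM logits scaled by the temperature, and Q = V + A, the softmax of Q / tau reproduces the pre-trained LM's distribution for any choice of V, because adding a per-state constant leaves softmax unchanged. A small self-contained sketch with illustrative values:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

tau = 2.0
logits = [1.5, -0.3, 0.7]          # LM head output for one state (made up)
for v in (0.0, -5.0, 42.0):        # arbitrary state value V(s)
    q_values = [v + tau * l for l in logits]       # Q = V + tau * logits
    pi = softmax([q / tau for q in q_values])      # policy from Q
    # Identical to the pre-trained LM's distribution, whatever V is.
    assert all(abs(p - p0) < 1e-12 for p, p0 in zip(pi, softmax(logits)))
```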
This would not affect the policy improvement step due to the translation invariance of the softmax function. However, during the policy evaluation step (see e.g. equation 5), the Bellman error can be heavily influenced by the V-values. When the V-values are the dominant source of error, the policy evaluation optimization could be largely driven by the state-only V-values. This can lead to a loss of the relative action values that we intended to preserve in the previous step. Pre-training of V. This can be addressed by adding a pre-training phase of V, during which we freeze the advantage function and train V by minimizing the temporal difference error (or, equivalently, doing policy evaluation). In this stage, we optimize the loss in equation 12 until convergence, where sg is a stop-gradient operator following standard semi-gradient optimization, and the bootstrap target uses a target action (details deferred to Section 4.3). In summary, our initialization steps ensure that, prior to fine-tuning Q, our Q-function meets two important conditions: it starts with the action distribution of the pre-trained LM, and it begins with low temporal difference error (because the pre-training of V in equation 12 directly minimizes it)." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "A Conservative Bellman Operator", + "text": "With a pre-trained state value function V, we are now ready to learn a good state-action value function Q via fine-tuning. We parameterize Q as the initialized Q-function plus a residual head q, and we initialize q to be (approximately) zero so that Q matches its initialization. This ensures that, on initialization, Q provides a good starting point for subsequent fine-tuning. Technically speaking, freezing V is not required, as one could finetune both V and the residual head q.
We however observed that finetuning a residual head q, with V frozen, leads to better stability. Although we avoid training Q from scratch, optimizing it with Q-learning family algorithms can still be challenging. We attribute this to the characteristics of the Bellman optimality operator, which seeks to learn the optimal value function Q* and optimal policy pi*, and which requires good data coverage of the state-action space (e.g. Jiang & Huang, 2020; Xie et al., 2021a; Zhan et al., 2022). In program synthesis, however, such an assumption can hardly be met due to the large state-action space and the high computational cost of Transformer inference. While the conventional Q-learning family relies on the Bellman optimality operator, recent works in RL, especially those considering the limited-data regime (e.g. Agarwal et al., 2020; Levine et al., 2020), often design \u201cconservative\u201d operators (e.g. Achiam et al., 2017; Kumar et al., 2020; Brandfonbrener et al., 2021) to address the difficulties it induces. Conservative Bellman operators. The concept behind conservative Bellman operators is to \u201caim low\u201d. Instead of learning the optimal Q* and pi*, these operators typically seek to learn a policy that either surpasses a behavior policy (which is used to collect an RL dataset in the offline RL literature; see e.g. Achiam et al., 2017; Brandfonbrener et al., 2021) or fine-tunes a pre-existing policy (e.g. Xie et al., 2021b; Yu & Zhang, 2023). This is often achieved by introducing a regularizer that penalizes deviations from the behavior/pre-existing policy.
In particular, as shown in equation 14, we define our conservative Bellman operator T_rho, which depends on a fixed, pre-defined policy rho, as T_rho Q(s, a) = r(s, a) + gamma Q(s', argmax_{a'} rho(a' | s')). The intuition behind our operator is that we evaluate the action-value function of a greedified policy rho_hat(a | s) = 1[a = argmax_{a'} rho(a' | s)], where 1[.] is the indicator function. The rationale behind greedification is that rho_hat can be seen as rho in greedy-decoding mode, which usually has better (one-shot) capability than sampling mode (although the latter has better generation diversity). Setting rho to the pre-trained LM policy, the operator seeks to learn a policy that outperforms it. We further comment on some properties of T_rho: proposition 4.1 shows T_rho is a contraction, meaning there is a unique fixed point. It leads to proposition 4.2, motivating our development of Section 4.5. Proposition 4.1: T_rho is a gamma-contraction in the sup norm. Given our conservative Bellman operator, we could define our conservative temporal difference loss as the squared error between Q(s, a) and the stop-gradient target sg[T_rho Q(s, a)] (equation 15), where the bootstrap action is the greedy action of rho at s'." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Implementation and Optimization", + "text": "Architecture and parameterization recap. Following (Le et al., 2022; Shojaee et al., 2023; Liu et al., 2023), we choose T5 (Raffel et al., 2020) as our base architecture, and the pre-trained LM is initialized with the publicly available CodeRL checkpoint. Specifically, the value and advantage heads share the same encoder as the LM, and the encoder is frozen throughout, to reduce the amount of learnable parameters. Two-stage training. As noted earlier, our training is composed of two stages: a pre-training stage of V, namely the V-stage, and a fine-tuning stage of Q, namely the Q-stage. A pseudo-algorithm could be found in Appendix A.
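As a hedged illustration of the conservative backup of Section 4.3, the sketch below bootstraps with the action chosen greedily by a fixed reference policy rho, instead of maximizing over all actions. Names and the discount value are illustrative, not the paper's implementation.

```python
GAMMA = 0.99  # illustrative discount factor

def conservative_td_target(r, s_next, done, q_fn, rho_fn, actions):
    # rho_fn(s) returns the fixed reference policy's probabilities over
    # actions; the greedified rho picks its single most likely action.
    if done:
        return r
    probs = rho_fn(s_next)
    a_greedy = max(actions, key=lambda a: probs[a])   # greedified rho
    return r + GAMMA * q_fn(s_next, a_greedy)          # evaluate, don't maximize

def td_loss(batch, q_fn, rho_fn, actions):
    # Mean squared error against the (stop-gradient) conservative target.
    total = 0.0
    for s, a, r, s_next, done in batch:
        target = conservative_td_target(r, s_next, done, q_fn, rho_fn, actions)
        total += (q_fn(s, a) - target) ** 2
    return total / len(batch)
```

Compared with the Bellman optimality backup, the target here never exceeds the value of following the reference policy greedily, which is the "aim low" behavior described above.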
In addition, further implementation details are deferred to Appendix H. V-stage: Given our development of Section 4.2, we pre-train the V function using stochastic gradient descent on the loss defined in equation 12. Q-stage (fine-tuning): In this stage, we seek to optimize Q to minimize our previously developed losses: the advantage loss and the conservative temporal difference loss, as defined in equations 8 and 15, respectively. In addition, it is also common practice to include a cross-entropy loss during fine-tuning. Therefore, we conclude our final loss function as equation 17, and Q is updated using stochastic gradient descent on this combined loss." + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "A Free Reward Model", + "text": "Reward modeling is crucial in language modeling and also in inverse RL (detailed discussions could be found in Appendix C). An intriguing finding from IRL, applicable to our framework, is that a trained Q-function can recover a reward function without additional training. Analogously to Garg et al. (2021), a one-to-one correspondence between Q and reward holds with our conservative Bellman operator T_rho. We define the inverse conservative Bellman operator accordingly. Proposition 4.2: The inverse conservative Bellman operator is a bijection. Proposition 4.2 shows that a Q uniquely corresponds to a reward function. (We use r_hat and r to name our recovered reward model and the real reward function, respectively.) Given the definition of the inverse operator, we could recover a reward model r_hat from Q without additional training. We use an estimated version in practice, with reasons deferred to Appendix F. Candidates selection with r_hat.
We leverage our reward model to do candidate program selection, as an example to highlight the additional benefits of value-based RL. We rank generated programs by the cumulative reward R_hat predicted by our reward model r_hat, to select the programs that are most likely to be correct. Specifically, for pass@k metrics, we follow the evaluation protocol used in CodeT (Chen et al., 2022), a work that considered program selection via automatically generated tests. This protocol computes pass@k by first generating n programs and then selecting a subset of k programs to evaluate pass@k. In our case, we select the k-sized subset with the top-k highest R_hat from the n total candidates. Our results in Section 5 follow this evaluation protocol. Remarks on r_hat. To further explain the motivation of ranking with R_hat, consider a realistic deployment setting where a fine-tuned model is deployed for end-user applications. Users often provide a language description of their needs but may not include test cases (writing tests can also be challenging for beginners or casual users). Additionally, the model is usually required to offer a single best response instead of a range of options. Therefore, the ability to rank programs without true rewards is a desirable advantage. To preview the effectiveness of r_hat, we show the correlation between the environmental reward and our cumulative reward R_hat. In Figure 3, the green region corresponds to correct programs, which have the highest R_hat on average. For incorrect programs, those with compile and runtime errors have the lowest and the second lowest R_hat, respectively. Programs that can be executed but fail some tests have the second highest R_hat. Hence, we conclude that R_hat has an evident positive correlation with the true reward." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Empirical Evaluation", + "text": "Sampling using Q.
Nucleus sampling (top-p sampling) (Holtzman et al., 2019) with a sampling temperature (Ackley et al., 1985) has been one of the most important sampling techniques. (The sampling temperature is different from the temperature in equation 10; they can take different values.) It can also be easily implemented in our framework: one could simply treat the Q-values as logits, and the sampling procedure would remain identical to standard LMs; see Appendix B for details. APPS benchmark and baselines. In line with prior RL-based works (Le et al., 2022; Shojaee et al., 2023; Liu et al., 2023), we evaluate \u212c-Coder on the challenging code contests benchmark APPS (Hendrycks et al., 2021). It contains 5,000 training and 5,000 testing problems, with three difficulty levels: introductory, interview and competition. We compare our \u212c-Coder with pre-trained or supervised fine-tuned LLM baselines: GPT2 (Radford et al., 2019), GPT3 (Brown et al., 2020), GPT-Neo (Black et al., 2021), GPT-J (Wang & Komatsuzaki, 2021), Codex (Chen et al., 2021a) and AlphaCode (Li et al., 2022); and RL fine-tuned baselines: CodeRL (Le et al., 2022), PPOCoder (Shojaee et al., 2023) and the concurrent work RLTF (Liu et al., 2023). APPS: without example test outcomes.
In the APPS dataset, each problem has several example unit tests (different from the hidden unit tests used for evaluation).\nThese example tests are usually leveraged to refine generated samples.\nFor example, CodeRL and RLTF consider a critic sampling (CS) strategy that refines and repairs generated programs based on the execution outcomes of the example tests.\nWe start with experimental results in which example test outcomes are not used (hence the CodeRL and RLTF results in Table 1 ###reference_### are without CS).\nTable 1 ###reference_### shows that our -Coder achieves the overall best pass@ for and the second best for (the best result is reported by the concurrent work RLTF).\nFor the Table 1 ###reference_### results, we use nucleus sampling with a sampling temperature of . We set to for and to for ,\nwhere is a hyper-parameter of our ranking protocol introduced in Section 4.5 ###reference_### (see Appendix I ###reference_### for an ablation study on ).\ntableAPPS results when using example test outcomes.\n\n\n\n\n\n\nModel\nPass@1\nPass@5\n\nIntro\nInter\nComp\nAll\nIntro\nInter\nComp\nAll\n\nCodex filtered\n22.78\n2.64\n3.04\n6.75\n24.52\n3.23\n3.08\n7.46\n\nAlphaCode filtered\n-\n-\n-\n-\n14.36\n5.63\n4.58\n7.17\n\nCodeRL cs\n6.77\n1.80\n0.69\n2.57\n15.27\n4.48\n2.36\n6.21\n\nCodeRL filtered\n16.27\n6.00\n4.27\n7.71\n-\n-\n-\n-\n\nCodeRL cs+filtered\n16.52\n6.16\n4.15\n7.83\n24.49\n8.58\n7.82\n11.61\n\nRLTF cs\n8.40\n2.28\n1.10\n3.27\n18.60\n5.57\n3.70\n7.80\n\n-Coder filtered\n18.00\n6.63\n2.30\n8.04\n23.30\n8.83\n6.40\n11.30\nAPPS: using example test outcomes.\nTable 5 ###reference_### lists the results using example tests.\nIn addition to the CS strategy that uses example tests to refine/repair programs, Li et al. (2022 ###reference_b53###) and Chen et al. 
(2021a ###reference_b18###) consider a filtered setting, in which programs failing example tests are excluded, and the pass@ is evaluated using (a subset of) programs that pass example tests (which is also related to the metric (Li et al., 2022 ###reference_b53###), the pass rate using submissions from samples).\nWe also test -Coder in this filtered setting.\nSimilarly, we first exclude programs that fail example tests. Suppose out of programs pass; we then follow our ranking protocol to get the top- out of programs for evaluation.\n-Coder outperforms baselines under either the CS or the filtered setting for .\nThe baseline CodeRL+CS+filtered, which incorporated both strategies, achieved a slight advantage over -Coder for pass@ while being surpassed by -Coder for pass@.\nIt is worth mentioning that CS is a plug-and-play component, which could also be combined with -Coder to further improve the pass rate.\nFor the results in Table 5 ###reference_###, we use a temperature of and set to , matching the used in Le et al. (2022 ###reference_b50###).\ntableGeneralization to CodeRL. Pass@ evaluated with top- ranked programs, generated by CodeRL. indicates absolute improvement achieved by ranking, compared to un-ranked pass@.\n\n\n\n\n\nk\nTemp.\nPass@k\n\nIntro\nInter\nComp\nAll\n\n1\n0.4\n6.30 1.91\n1.27 0.37\n0.50 0.37\n2.12 0.68\n\n0.6\n6.00 2.13\n1.23 0.42\n0.50 0.36\n2.04 0.75\n\n5\n0.4\n9.30 -0.2\n2.10 0.01\n0.70 0.15\n3.26 0.00\n\n0.6\n10.20 0.58\n2.57 0.41\n0.80 0.16\n3.74 0.39\ntableZero-shot pass@ on MBPP. indicates absolute improvement achieved by ranking. \n\n\n\n\nTemp.\nk=1\nk=5\nk=10\nk=80\n\n0.7\n20.13 6.61\n37.04 5.61\n44.45 4.63\n64.00 1.41\n\n0.8\n18.89 6.99\n36.59 7.21\n44.46 6.59\n65.20 4.28\n\n0.9\n17.32 7.34\n35.04 8.58\n43.15 8.22\n63.20 4.33\nGeneralization ability.\nIn addition, we test the generalization ability of our dual strategy, ranking with . We study two aspects: generalization to other models and generalization to different domains. 
To this end, we designed the following experiments, which confirmed its generalizability in the positive.\nFor the former, we generate (off-policy) programs using CodeRL (with ), and rank those programs by . Table 5 ###reference_### shows that our ranking strategy leads to improvements in most cases, even though the programs to be ranked are not generated by -Coder.\nFor the latter, we test our dual strategy on another dataset, MBPP (Austin et al., 2021 ###reference_b7###) (with ). Table 5 ###reference_### shows consistent improvements for all temperatures and ." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this work, we explore the feasibility of value-based RL algorithms for the program synthesis task. We demonstrate how to stabilize and accelerate training through -function initialization and conservative updates. Moreover, our work is conducted with minimal reward engineering effort, thereby placing the emphasis on algorithm design.\nWhile policy-based algorithms remain mainstream in the current program synthesis literature, the question of how to effectively leverage off-policy programs, including historical synthetic samples, in a principled way might still be under-explored.\nWe are convinced that value-based RL offers a promising direction to address this question, and thereby to scale RL for code generation at large by (re)-using the extensive collection of off-policy programs. Our work could thus serve as an important initial step towards this direction." 
+ } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Pseudo-Code for Training", + "text": "" + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Pseudo-Code for Sampling", + "text": "" + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Additional Related Works", + "text": "Off-policy policy-based methods.\nOne strand of off-policy policy-based methods is based on importance ratios. Suppose the data is collected by a behavior policy ; PG with off-policy data can then be corrected by . This allows unbiased gradients even though the data distribution is off-policy. However, computing the ratio is not always feasible, as the density function of off-policy data, such as human data, is often unknown. In addition, this correction can lead to high variance due to the product of ratios along trajectories.\nWhile vanilla importance-weighted off-policy PG does not require the approximation of value functions, some advanced ratio-based methods often incorporate value functions, such as (Imani et al., 2018 ###reference_b42###; Liu et al., 2020 ###reference_b56###).\nAnother viable approach is the direct combination of value-based and policy-based methods, often referred to as the actor-critic framework, e.g. (Konda & Tsitsiklis, 1999 ###reference_b48###; Degris et al., 2012 ###reference_b23###). Although actor-critic methods are often considered a third category, besides policy-based and value-based, we and some other works (Fujimoto et al., 2018 ###reference_b30###) lean towards categorizing actor-critic methods as more value-based, since the major difficulty lies in value function approximation.\nNevertheless, both directions of extending policy-based methods to the off-policy setting largely rely on value functions. This emphasizes the motivation and significance of our work.\nReward modeling and beyond. 
Due to the successes of reinforcement learning from human/AI feedback (Christiano et al., 2017 ###reference_b22###; Bai et al., 2022b ###reference_b9###), reward modeling and RL fine-tuning with a learned reward model have become a popular choice for post-SFT (supervised fine-tuning) refinement (see e.g. Ziegler et al., 2019 ###reference_b103###; Stiennon et al., 2020 ###reference_b82###; Bai et al., 2022a ###reference_b8###; Ouyang et al., 2022 ###reference_b67###).\nIn particular, in program synthesis, Le et al. (2022 ###reference_b50###) train a classifier that predicts unit test outcomes as their reward model for RL fine-tuning. However, reward models can sometimes be expensive to train, and their quality can heavily impact RL fine-tuning performance. Recent works (e.g. Rafailov et al., 2023 ###reference_b72###; Diao et al., 2023 ###reference_b24###) explore preference learning beyond the conventional reward model.\nModeling the reward function, on the other hand, has been a long-lasting topic in inverse RL and imitation learning (IRL or IL, see e.g. Ng et al., 2000 ###reference_b63###; Abbeel & Ng, 2004 ###reference_b1###; Ziebart et al., 2008 ###reference_b102###; Ho & Ermon, 2016 ###reference_b39###). While conventional IRL/IL often iterates between reward model fitting and RL training stages, recent IL works (Jacq et al., 2019 ###reference_b43###; Garg et al., 2021 ###reference_b31###) also explore alternatives to explicit reward modeling, to reduce the training instability and optimization difficulty caused by the iterative optimization scheme. Specifically, Garg et al. (2021 ###reference_b31###) leverage the one-to-one correspondence between the -function and the reward model, given the soft Bellman operator, to eliminate the reward fitting step.\nCandidate selection in program synthesis. Existing works have shown that one could improve the program pass rate by filtering out programs that are likely to be incorrect. For instance, Chen et al. 
(2021a ###reference_b18###) filtered out programs that cannot pass the example unit tests given in doc-strings, and Chen et al. (2022 ###reference_b17###) filtered out programs that cannot pass generated unit tests.\nFurthermore, reward models are also often used to rank candidate programs (see e.g. Gulcehre et al., 2023 ###reference_b33###; Touvron et al., 2023 ###reference_b85###)." + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D A Spectrum of RL Applications", + "text": "###figure_7### To conceptually demonstrate the differences between policy-based and value-based methods, and why program synthesis might be well-suited to value-based approaches, Figure 4 ###reference_### presents a spectrum of RL applications. It can be observed that in scenarios where rewards are not expensive to evaluate, or where there is plenty of off-policy data (data not generated by the current policy/model), value-based methods tend to be preferred.\nConsider, for instance, InstructGPT (Ouyang et al., 2022 ###reference_b67###) (policy-based) and AlphaGo (Silver et al., 2016 ###reference_b80###) (value-based). The former relies on human annotators (expensive) to label model-generated (on-policy) responses, while the latter obtains rewards from simulators (cheap), and leverages (1) human expert games (off-policy) during training and (2) the re-use of historical games (off-policy) through experience replay.\nTable 2 ###reference_### provides explanations for the application plot of Figure 4 ###reference_###. Applications in games typically find it easy to obtain rewards and make extensive use of off-policy data, e.g. human games or historical replays. Conversely, InstructGPT obtains its rewards from preferences labeled by human annotators, with the data predominantly generated by the GPT model itself. The self-driving application notably has a high cost of gathering rewards, due to the risks of real-world driving. 
While existing driving data could be utilized, Kendall et al. (2019 ###reference_b46###) specifically chose not to use pre-collected data, leading to their choice of a policy-based algorithm.\nIn code generation, despite the availability of cheap rewards and the existing collection of off-policy programs, whether human-written or historical synthetic programs, the current literature leans towards policy-based methods. We believe that value-based methods could be a promising direction, given their similarity to tasks with simulators." + }, + { + "section_id": "Appendix 5", + "parent_section_id": null, + "section_name": "Appendix E Reward Engineering Comparison", + "text": "Table 3 ###reference_### shows that ours requires the least reward engineering effort. Note that our reward model is directly derived from , and is not used for training.\nTable 4 ###reference_### shows the results when only the basic reward function (defined in equation 2 ###reference_###) is used, under the setting without example test outcomes. CodeRL and RLTF results are duplicated from their reports." + }, + { + "section_id": "Appendix 6", + "parent_section_id": null, + "section_name": "Appendix F Advantage of Approximate Version of", + "text": "Recall that our recovered reward is computed by\nImagine a scenario in which we sample/decode using a trained : the forward pass will compute and for each timestep, because of our dueling architecture. But will not be evaluated during generation, because is only used when computing . Computing the exact version would require the additional computation of during generation. In contrast, and are already computed during generation; therefore, it requires almost no additional computation to compute .
+ }, + { + "section_id": "Appendix 7", + "parent_section_id": null, + "section_name": "Appendix G Additional Implementation Tricks", + "text": "Given our reward design in equation 2 ###reference_###, the cumulative reward is upper bounded by .\nWe enforce by transforming the state value function as , where is a soft absolute function. Given , enforcing leads to .\nIn Section 4.3 ###reference_###, we initialize in a way such that and . The former can be done by simply loading the checkpoint . Adding a residual head that is initialized to output zeros can be done with a simple trick: one can add two heads and , let be trainable and be fixed for subsequent fine-tuning; setting achieves the desired functionality." + }, + { + "section_id": "Appendix 8", + "parent_section_id": null, + "section_name": "Appendix H Training and Evaluation Details", + "text": "In supplement to the implementation details in Sections 4.4 ###reference_### and 5 ###reference_###, we give more low-level details here.\nAPPS dataset. In addition to the train/test split details described in Section 5 ###reference_###, the APPS dataset, on average, consists of 2 example unit tests, 21 hidden unit tests, and 23 ground truth programs per problem. We follow the same procedure as Hendrycks et al. (2021 ###reference_b38###); Le et al. (2022 ###reference_b50###) to construct prompts for both training and evaluation. Specifically, see Section 3 of Hendrycks et al. (2021 ###reference_b38###).\nMBPP dataset. MBPP has 974 instances with a 374/90/500 train/val/test split and, in addition, 10 problems reserved for few-shot learning. Because we only do zero-shot evaluation on MBPP, only the 500 test problems are used for evaluation. Each problem of MBPP usually comes with three unit tests. In addition, these tests are usually not hidden. Therefore, prior works Le et al. (2022 ###reference_b50###); Shojaee et al. (2023 ###reference_b78###); Liu et al. 
(2023 ###reference_b55###) often explicitly incorporate the tests into the prompt string. We follow WizardCoder (Luo et al., 2023 ###reference_b58###) to construct our input format. Details can be found in this repo ###reference_main/WizardCoder###.\nPre-trained model. We initialize our model with the CodeRL checkpoint publicly available here ###reference_###, meaning we initialize , , and from it.\nNote that we freeze the encoder for both the -stage and the -stage; therefore, the encoder is shared during both training and generation. For both training and generation, we set the maximum length to 600 and 512 for source and target sequences, respectively.\nTraining data preparation. While we use to represent our training dataset, we have not yet elaborated on how it is constructed. In general, we follow the protocol of prior RL-based works of combining all ground truth programs and a set of programs generated by the pre-trained model, for each problem . Specifically, we generate 256 programs per problem using the pre-trained checkpoint. Combined with the ground truth programs, there are, on average, 278 programs per problem.\nMini-batch preparation. By the prior definition, our dataset now contains both ground truth programs and generated programs.\nNotably, the volume of generated programs is significantly larger than that of the ground truth programs. This means that if one were to randomly sample from the dataset, generated programs would dominate the mini-batches.\nTo address this, when preparing a mini-batch, we sample ground truth programs and generated programs, where is the batch size.\n-stage training. In the -stage, we pre-train the state-value function . We conduct our experiment with 4 A100-80G GPUs. Specifically, we use a batch size of 16 for each GPU and a gradient accumulation step of 4, resulting in a total batch size of 256. For the optimizer and scheduler, we use the AdamW optimizer (Loshchilov & Hutter, 2018 ###reference_b57###) with a constant learning rate of 1e-5 and a weight decay of 0.05. 
We train for 18k gradient steps.\n-stage training. In the -stage, we conduct our experiment with 8 A100-80G GPUs. Specifically, we use a batch size of 16 for each GPU and a gradient accumulation step of 1, resulting in a total batch size of 128. For the optimizer and scheduler, we use AdamW with a peak learning rate of 3e-5, a weight decay of 0.05, and a linear decay scheduler with no warmup. We train for 10k gradient steps.\nOther hyper-parameters. We set the ground truth data ratio and the energy-based policy temperature (see equation 10 ###reference_###) for all experiments. In the -stage, we use and ." + }, + { + "section_id": "Appendix 9", + "parent_section_id": null, + "section_name": "Appendix I Ablation on", + "text": "###figure_8### Table 5 ###reference_### conducts an ablation study on the ranking budget ; it can be observed that our ranking strategy achieves consistent improvements under different budgets ." + }, + { + "section_id": "Appendix 10", + "parent_section_id": null, + "section_name": "Appendix J Comments on Properties", + "text": "The proof is similar to Lemma C.3 in Garg et al. (2021 ###reference_b31###).\nTo prove that is a bijection, it suffices to show that for any , there exists a unique such that .\nNote that by Proposition 4.1 ###reference_theorem1###, there exists a unique that satisfies .\nRearranging the terms gives .\nThis completes the proof.\n\u220e" + }, + { + "section_id": "Appendix 11", + "parent_section_id": null, + "section_name": "Appendix K Discussion on Limitations", + "text": "While being exploratory, our work admits certain limitations, including the additional frozen parameters introduced, and the observation that the raw performance (without ranking) is mixed compared to CodeRL (see Table 5 ###reference_###) (which we believe is somewhat excusable, as we use fewer reward designs). 
However, we remark that the effectiveness of our overall framework, including the dual strategy, is non-trivial, especially with limited reward engineering.\nIt is also informative to show results filtered by the true environmental reward function , instead of results ranked by our recovered reward function .\nFiltering with requires using hidden tests, meaning it cannot be implemented in realistic settings (see also the discussion in Section 4.5 ###reference_###).\nHowever, it can serve as an upper limit for our ranking strategy and as a sanity check.\n(Roughly speaking, if , the pass rates of -ranking and -filtering would be identical.)\nTo this end, we use the same set of candidate programs as those in Table 1 ###reference_###, but apply the ground truth reward function to filter candidates rather than using for ranking.\nThe corresponding results in Table 5 ###reference_### show that, although -ranking is effective, there remains large room for improvement." + } + ], + "tables": { + "1": { + "table_html": "
\n
Table 1: Empirical evaluation on APPS test set. , and indicate results duplicated from\u00a0Le et\u00a0al. (2022), Shojaee et\u00a0al. (2023) and Liu et\u00a0al. (2023), respectively. Bold numbers indicate the best results, and underlined numbers mean our result is the second best. Intro, inter and comp stand for introductory, interview and competition, respectively.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Model \n\n\n# trainable\n\nparameters\n Pass@1Pass@5Pass@1000
IntroInterCompAllIntroInterCompAllIntroInterCompAll
Codex\n12B4.140.140.020.929.650.510.092.2525.023.703.237.87
AlphaCode\n1B--------17.675.247.068.09
GPT3\n175B0.200.030.000.06--------
GPT2\n0.1B1.000.330.000.402.700.730.001.02----
GPT2\n1.5B1.300.700.000.683.601.030.001.3425.009.278.8012.32
GPT-Neo\n2.7B3.900.570.001.125.500.800.001.5827.909.8311.4013.76
GPT-J\n6B5.601.000.501.829.201.731.003.0835.2013.1513.5117.63
RL based methods - without using example unit tests
CodeRL\n770M6.201.500.302.209.391.900.423.1035.3013.3313.6017.78
PPOCoder\n770M5.201.000.501.749.102.501.203.5635.2013.3513.9017.77
RLTF\n770M4.160.970.201.4510.122.650.823.7838.3015.1315.9019.92
\n-Coder\n \n\n\n770M/stage\n 333For both and -stage, our model trains a decoder and heads, i.e. 770M trainable params per stage.\n6.701.500.302.3010.402.630.703.8037.0013.6712.6018.12
\n
\n
", + "capture": "Table 1: Empirical evaluation on APPS test set. , and indicates results duplicated from\u00a0Le et\u00a0al. (2022), Shojaee et\u00a0al. (2023) and Liu et\u00a0al. (2023), respectively. Bold number indicates the best result and underlined number means our result are the second best. Intro, inter and comp stand for introductory, interview and competition, respectively. " + }, + "2": { + "table_html": "
\n
Table 2: Summary of RL applications.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ReferencesType of RLCosts of Getting RewardsAvailable Off-Policy Data
Atari(Mnih et\u00a0al., 2013)value\ncheap: simulator\nextensive: history/human games
GO(Silver et\u00a0al., 2016)value\ncheap: simulator\nextensive: history/human games
Poker\n\n\n\n(Morav\u010d\u00edk et\u00a0al., 2017)\n\n(Brown & Sandholm, 2018)\n \nvalue444While Poker AI often uses counterfactual regret minimization\u00a0(Zinkevich et\u00a0al., 2007), which isn\u2019t strictly reinforcement learning, the shared principle of estimating action values allows us to categorize it under value-based methods.\ncheap: simulator\nextensive: history/human games
StarCraft II(Arulkumaran et\u00a0al., 2019)value\ncheap: simulator\nextensive: history/human games
InstructGPT(Ouyang et\u00a0al., 2022)policy\nexpensive: human annotators\nlimited: mostly model-generated data
\n\n\n\nImage\n\nCaption\n \n\n\n\n\n(Ranzato et\u00a0al., 2015)\n\n(Rennie et\u00a0al., 2017)\n \npolicy\ncheap: automatic metrics\nlimited: mostly model-generated data
Self-driving(Kendall et\u00a0al., 2019)policy\nexpensive: driving in real-world\nlimited: mostly model-generated data
\n\n\n\nCode\n\nGeneration\n \n\n\n\n\n(Le et\u00a0al., 2022)\n\n(Shojaee et\u00a0al., 2023)\n\n(Liu et\u00a0al., 2023)\n \npolicy\ncheap: unit testing\nextensive: collection of human programs
\n
\n
", + "capture": "Table 2: Summary of RL applications." + }, + "3": { + "table_html": "
\n
Table 3: Comparison of reward designs
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
RewardRemarkOursCodeRLRLTFPPOCoder
Basicequation\u00a02\n
Reward Modellearned reward model
Fine-Grainedfine-grained error type & location of error
Adaptiveratio of passed tests
Syntactic Correctnesscompilable
Syntactic Matchingsyntactic similarity to ground truth
Semantic Matchingsemantic similarity to ground truth
\n
\n
", + "capture": "Table 3: Comparison of reward designs" + }, + "4": { + "table_html": "
\n
Table 4: Performance with only basic reward (equation\u00a02). and indicates results duplicated from\u00a0Le et\u00a0al. (2022) and Liu et\u00a0al. (2023), respectively.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelPass@1Pass@5
IntroInterCompAllIntroInterCompAll
CodeRL\n4.601.100.201.627.101.570.402.44
RLTF\n---1.37---3.50
\n-Coder6.701.500.302.3010.402.630.703.80
\n
\n
", + "capture": "Table 4: Performance with only basic reward (equation\u00a02). and indicates results duplicated from\u00a0Le et\u00a0al. (2022) and Liu et\u00a0al. (2023), respectively." + }, + "5": { + "table_html": "
\n
Table 5: Pass@ results are evaluated with greedy decoded programs, and pass@ are computed by sampled programs using a temperature of 0.4.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Pass@CodeRL\n-Coder
11.601.60
53.282.88
507.167.35
1008.769.18
\n
\n
\\captionof\n\ntableRanking with compared with filtering with real environmental reward function , i.e. hidden tests. -ranked results are duplicated from Table\u00a01.\n\n\n\n\n-ranked\n-filtered\n\n\npass@1\npass@5\n\nIntro\n6.70\n10.40\n26.60\n\nInter\n1.50\n2.63\n7.87\n\nComp\n0.30\n0.70\n5.10\n\nAll\n2.30\n3.80\n11.06\n\n
\n
\n
", + "capture": "Table 5: Pass@ results are evaluated with greedy decoded programs, and pass@ are computed by sampled programs using a temperature of 0.4." + } + }, + "image_paths": { + "1": { + "figure_path": "2310.03173v2_figure_1.png", + "caption": "Figure 1: \nTraining curves on APPS train set. \u25a0\u25a0\\!\\blacksquare\\!\u25a0 denotes \u212c\u212c{\\mathcal{B}}caligraphic_B-Coder, \u22c6\u22c6\\!\\star\\!\u22c6 removes our conservative operator, and \u25bc\u25bc\\!\\blacktriangledown\\!\u25bc is \u212c\u212c{\\mathcal{B}}caligraphic_B-Coder without both our operator and initialization.", + "url": "http://arxiv.org/html/2310.03173v2/x1.png" + }, + "2(a)": { + "figure_path": "2310.03173v2_figure_2(a).png", + "caption": "Figure 2: \n(a) A forward graph of conventional enc-dec LMs, with a\ncheckpoint \u03b8ckptsubscript\ud835\udf03ckpt{\\theta_{\\text{ckpt}}}italic_\u03b8 start_POSTSUBSCRIPT ckpt end_POSTSUBSCRIPT, where p\ud835\udc5dpitalic_p is a distribution over \ud835\udc9c\ud835\udc9c\\mathcal{A}caligraphic_A and \u2113\u2113\\ellroman_\u2113 denotes logits\n;\n(b) Our forward graph for pre-training \u03d5italic-\u03d5\\phiitalic_\u03d5;\n(c) Our forward graph for fine-tuning \u03b8\ud835\udf03\\thetaitalic_\u03b8.\n indicates a frozen/constant component.", + "url": "http://arxiv.org/html/2310.03173v2/x2.png" + }, + "2(b)": { + "figure_path": "2310.03173v2_figure_2(b).png", + "caption": "Figure 2: \n(a) A forward graph of conventional enc-dec LMs, with a\ncheckpoint \u03b8ckptsubscript\ud835\udf03ckpt{\\theta_{\\text{ckpt}}}italic_\u03b8 start_POSTSUBSCRIPT ckpt end_POSTSUBSCRIPT, where p\ud835\udc5dpitalic_p is a distribution over \ud835\udc9c\ud835\udc9c\\mathcal{A}caligraphic_A and \u2113\u2113\\ellroman_\u2113 denotes logits\n;\n(b) Our forward graph for pre-training \u03d5italic-\u03d5\\phiitalic_\u03d5;\n(c) Our forward graph for fine-tuning \u03b8\ud835\udf03\\thetaitalic_\u03b8.\n indicates a frozen/constant component.", + "url": 
"http://arxiv.org/html/2310.03173v2/x3.png" + }, + "2(c)": { + "figure_path": "2310.03173v2_figure_2(c).png", + "caption": "Figure 2: \n(a) A forward graph of conventional enc-dec LMs, with a\ncheckpoint \u03b8ckptsubscript\ud835\udf03ckpt{\\theta_{\\text{ckpt}}}italic_\u03b8 start_POSTSUBSCRIPT ckpt end_POSTSUBSCRIPT, where p\ud835\udc5dpitalic_p is a distribution over \ud835\udc9c\ud835\udc9c\\mathcal{A}caligraphic_A and \u2113\u2113\\ellroman_\u2113 denotes logits\n;\n(b) Our forward graph for pre-training \u03d5italic-\u03d5\\phiitalic_\u03d5;\n(c) Our forward graph for fine-tuning \u03b8\ud835\udf03\\thetaitalic_\u03b8.\n indicates a frozen/constant component.", + "url": "http://arxiv.org/html/2310.03173v2/x4.png" + }, + "2(d)": { + "figure_path": "2310.03173v2_figure_2(d).png", + "caption": "Figure 2: \n(a) A forward graph of conventional enc-dec LMs, with a\ncheckpoint \u03b8ckptsubscript\ud835\udf03ckpt{\\theta_{\\text{ckpt}}}italic_\u03b8 start_POSTSUBSCRIPT ckpt end_POSTSUBSCRIPT, where p\ud835\udc5dpitalic_p is a distribution over \ud835\udc9c\ud835\udc9c\\mathcal{A}caligraphic_A and \u2113\u2113\\ellroman_\u2113 denotes logits\n;\n(b) Our forward graph for pre-training \u03d5italic-\u03d5\\phiitalic_\u03d5;\n(c) Our forward graph for fine-tuning \u03b8\ud835\udf03\\thetaitalic_\u03b8.\n indicates a frozen/constant component.", + "url": "http://arxiv.org/html/2310.03173v2/extracted/5479586/figures/snowflake.png" + }, + "3": { + "figure_path": "2310.03173v2_figure_3.png", + "caption": "Figure 3: Kernel density estimation of R~\u03b8\u2062(\u22c5)subscript~\ud835\udc45\ud835\udf03\u22c5\\tilde{R}_{\\theta}(\\cdot)over~ start_ARG italic_R end_ARG start_POSTSUBSCRIPT italic_\u03b8 end_POSTSUBSCRIPT ( \u22c5 ) evaluated on a collection of generated programs. 
The x-axis represents the predicted reward given by $\\tilde{R}_{\\theta}$ and the y-axis is its density. Color codes the true outcomes defined in equation 2.", + "url": "http://arxiv.org/html/2310.03173v2/x5.png" + }, + "4": { + "figure_path": "2310.03173v2_figure_4.png", + "caption": "Figure 4: A collection of RL applications. V and P represent value-based and policy-based RL, respectively. The x-axis shows the difficulty of obtaining rewards, while the y-axis measures the amount of off-policy data. Tasks that face significant hurdles in gathering rewards or have limited off-policy data typically lean towards policy-based algorithms. Tasks where rewards are more readily obtained, or that benefit from a substantial collection of off-policy data, favor value-based methods. See descriptions of each task in Table 2.", + "url": "http://arxiv.org/html/2310.03173v2/x6.png" + }, + "5": { + "figure_path": "2310.03173v2_figure_5.png", + "caption": "Figure 5: Ablation on $m$: our ranking strategy achieves consistent improvements under different budgets $m$.", + "url": "http://arxiv.org/html/2310.03173v2/x7.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Apprenticeship learning via inverse reinforcement learning.", + "author": "Pieter Abbeel and Andrew Y Ng.", + "venue": "In Proceedings of the twenty-first international conference on Machine learning, pp. 1, 2004.", + "url": null + } + }, + { + "2": { + "title": "Constrained policy optimization.", + "author": "Joshua Achiam, David Held, Aviv Tamar, and Pieter Abbeel.", + "venue": "In International conference on machine learning, pp. 22\u201331. 
PMLR, 2017.", + "url": null + } + }, + { + "3": { + "title": "A learning algorithm for boltzmann machines.", + "author": "David H Ackley, Geoffrey E Hinton, and Terrence J Sejnowski.", + "venue": "Cognitive science, 9(1):147\u2013169, 1985.", + "url": null + } + }, + { + "4": { + "title": "An optimistic perspective on offline reinforcement learning.", + "author": "Rishabh Agarwal, Dale Schuurmans, and Mohammad Norouzi.", + "venue": "In International Conference on Machine Learning, pp. 104\u2013114. PMLR, 2020.", + "url": null + } + }, + { + "5": { + "title": "Palm 2 technical report.", + "author": "Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al.", + "venue": "arXiv preprint arXiv:2305.10403, 2023.", + "url": null + } + }, + { + "6": { + "title": "Alphastar: An evolutionary computation perspective.", + "author": "Kai Arulkumaran, Antoine Cully, and Julian Togelius.", + "venue": "In Proceedings of the genetic and evolutionary computation conference companion, pp. 
314\u2013315, 2019.", + "url": null + } + }, + { + "7": { + "title": "Program synthesis with large language models.", + "author": "Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al.", + "venue": "arXiv preprint arXiv:2108.07732, 2021.", + "url": null + } + }, + { + "8": { + "title": "Training a helpful and harmless assistant with reinforcement learning from human feedback.", + "author": "Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al.", + "venue": "arXiv preprint arXiv:2204.05862, 2022a.", + "url": null + } + }, + { + "9": { + "title": "Constitutional ai: Harmlessness from ai feedback.", + "author": "Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al.", + "venue": "arXiv preprint arXiv:2212.08073, 2022b.", + "url": null + } + }, + { + "10": { + "title": "A distributional perspective on reinforcement learning.", + "author": "Marc G Bellemare, Will Dabney, and R\u00e9mi Munos.", + "venue": "In International conference on machine learning, pp. 449\u2013458. 
PMLR, 2017.", + "url": null + } + }, + { + "11": { + "title": "Dynamic programming.", + "author": "Richard Bellman.", + "venue": "Science, 153(3731):34\u201337, 1966.", + "url": null + } + }, + { + "12": { + "title": "Scheduled sampling for sequence prediction with recurrent neural networks.", + "author": "Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer.", + "venue": "Advances in neural information processing systems, 28, 2015.", + "url": null + } + }, + { + "13": { + "title": "GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow, March 2021.", + "author": "Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman.", + "venue": "URL https://doi.org/10.5281/zenodo.5297715.", + "url": null + } + }, + { + "14": { + "title": "Offline rl without off-policy evaluation.", + "author": "David Brandfonbrener, Will Whitney, Rajesh Ranganath, and Joan Bruna.", + "venue": "Advances in neural information processing systems, 34:4933\u20134946, 2021.", + "url": null + } + }, + { + "15": { + "title": "Superhuman ai for heads-up no-limit poker: Libratus beats top professionals.", + "author": "Noam Brown and Tuomas Sandholm.", + "venue": "Science, 359(6374):418\u2013424, 2018.", + "url": null + } + }, + { + "16": { + "title": "Language models are few-shot learners.", + "author": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al.", + "venue": "Advances in neural information processing systems, 33:1877\u20131901, 2020.", + "url": null + } + }, + { + "17": { + "title": "Codet: Code generation with generated tests.", + "author": "Bei Chen, Fengji Zhang, Anh Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, and Weizhu Chen.", + "venue": "In The Eleventh International Conference on Learning Representations, 2022.", + "url": null + } + }, + { + "18": { + "title": "Evaluating large language models trained on code.", + "author": "Mark Chen, 
Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al.", + "venue": "arXiv preprint arXiv:2107.03374, 2021a.", + "url": null + } + }, + { + "19": { + "title": "Execution-guided neural program synthesis.", + "author": "Xinyun Chen, Chang Liu, and Dawn Song.", + "venue": "In International Conference on Learning Representations, 2018.", + "url": null + } + }, + { + "20": { + "title": "Latent execution for neural program synthesis beyond domain-specific languages.", + "author": "Xinyun Chen, Dawn Song, and Yuandong Tian.", + "venue": "Advances in Neural Information Processing Systems, 34:22196\u201322208, 2021b.", + "url": null + } + }, + { + "21": { + "title": "Palm: Scaling language modeling with pathways.", + "author": "Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al.", + "venue": "arXiv preprint arXiv:2204.02311, 2022.", + "url": null + } + }, + { + "22": { + "title": "Deep reinforcement learning from human preferences.", + "author": "Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei.", + "venue": "Advances in neural information processing systems, 30, 2017.", + "url": null + } + }, + { + "23": { + "title": "Off-policy actor-critic.", + "author": "Thomas Degris, Martha White, and Richard S Sutton.", + "venue": "arXiv preprint arXiv:1205.4839, 2012.", + "url": null + } + }, + { + "24": { + "title": "Lmflow: An extensible toolkit for finetuning and inference of large foundation models.", + "author": "Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, and Tong Zhang.", + "venue": "arXiv preprint arXiv:2306.12420, 2023.", + "url": null + } + }, + { + "25": { + "title": "Benchmarking deep reinforcement learning for continuous control.", + "author": "Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and 
Pieter Abbeel.", + "venue": "In International conference on machine learning, pp. 1329\u20131338. PMLR, 2016.", + "url": null + } + }, + { + "26": { + "title": "Deep reinforcement learning in large discrete action spaces.", + "author": "Gabriel Dulac-Arnold, Richard Evans, Hado van Hasselt, Peter Sunehag, Timothy Lillicrap, Jonathan Hunt, Timothy Mann, Theophane Weber, Thomas Degris, and Ben Coppin.", + "venue": "arXiv preprint arXiv:1512.07679, 2015.", + "url": null + } + }, + { + "27": { + "title": "Write, execute, assess: Program synthesis with a repl.", + "author": "Kevin Ellis, Maxwell Nye, Yewen Pu, Felix Sosa, Josh Tenenbaum, and Armando Solar-Lezama.", + "venue": "Advances in Neural Information Processing Systems, 32, 2019.", + "url": null + } + }, + { + "28": { + "title": "Example-directed synthesis: a type-theoretic interpretation.", + "author": "Jonathan Frankle, Peter-Michael Osera, David Walker, and Steve Zdancewic.", + "venue": "ACM Sigplan Notices, 51(1):802\u2013815, 2016.", + "url": null + } + }, + { + "29": { + "title": "Incoder: A generative model for code infilling and synthesis.", + "author": "Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Scott Yih, Luke Zettlemoyer, and Mike Lewis.", + "venue": "In The Eleventh International Conference on Learning Representations, 2022.", + "url": null + } + }, + { + "30": { + "title": "Addressing function approximation error in actor-critic methods.", + "author": "Scott Fujimoto, Herke Hoof, and David Meger.", + "venue": "In International conference on machine learning, pp. 1587\u20131596. 
PMLR, 2018.", + "url": null + } + }, + { + "31": { + "title": "Iq-learn: Inverse soft-q learning for imitation.", + "author": "Divyansh Garg, Shuvam Chakraborty, Chris Cundy, Jiaming Song, and Stefano Ermon.", + "venue": "Advances in Neural Information Processing Systems, 34:4028\u20134039, 2021.", + "url": null + } + }, + { + "32": { + "title": "Q-prop: Sample-efficient policy gradient with an off-policy critic.", + "author": "Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E Turner, and Sergey Levine.", + "venue": "arXiv preprint arXiv:1611.02247, 2016.", + "url": null + } + }, + { + "33": { + "title": "Reinforced self-training (rest) for language modeling.", + "author": "Caglar Gulcehre, Tom Le Paine, Srivatsan Srinivasan, Ksenia Konyushkova, Lotte Weerts, Abhishek Sharma, Aditya Siddhant, Alex Ahern, Miaosen Wang, Chenjie Gu, et al.", + "venue": "arXiv preprint arXiv:2308.08998, 2023.", + "url": null + } + }, + { + "34": { + "title": "Spreadsheet data manipulation using examples.", + "author": "Sumit Gulwani, William R Harris, and Rishabh Singh.", + "venue": "Communications of the ACM, 55(8):97\u2013105, 2012.", + "url": null + } + }, + { + "35": { + "title": "Reinforcement learning with deep energy-based policies.", + "author": "Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine.", + "venue": "In International conference on machine learning, pp. 1352\u20131361. PMLR, 2017.", + "url": null + } + }, + { + "36": { + "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor.", + "author": "Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine.", + "venue": "In International conference on machine learning, pp. 1861\u20131870. 
PMLR, 2018.", + "url": null + } + }, + { + "37": { + "title": "Learning to play in a day: Faster deep reinforcement learning by optimality tightening.", + "author": "Frank S He, Yang Liu, Alexander G Schwing, and Jian Peng.", + "venue": "In International Conference on Learning Representations, 2016.", + "url": null + } + }, + { + "38": { + "title": "Measuring coding challenge competence with apps.", + "author": "Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, et al.", + "venue": "arXiv preprint arXiv:2105.09938, 2021.", + "url": null + } + }, + { + "39": { + "title": "Generative adversarial imitation learning.", + "author": "Jonathan Ho and Stefano Ermon.", + "venue": "Advances in neural information processing systems, 29, 2016.", + "url": null + } + }, + { + "40": { + "title": "Training compute-optimal large language models.", + "author": "Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al.", + "venue": "arXiv preprint arXiv:2203.15556, 2022.", + "url": null + } + }, + { + "41": { + "title": "The curious case of neural text degeneration.", + "author": "Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi.", + "venue": "In International Conference on Learning Representations, 2019.", + "url": null + } + }, + { + "42": { + "title": "An off-policy policy gradient theorem using emphatic weightings.", + "author": "Ehsan Imani, Eric Graves, and Martha White.", + "venue": "Advances in Neural Information Processing Systems, 31, 2018.", + "url": null + } + }, + { + "43": { + "title": "Learning from a learner.", + "author": "Alexis Jacq, Matthieu Geist, Ana Paiva, and Olivier Pietquin.", + "venue": "In International Conference on Machine Learning, pp. 2990\u20132999. 
PMLR, 2019.", + "url": null + } + }, + { + "44": { + "title": "Minimax value interval for off-policy evaluation and policy optimization.", + "author": "Nan Jiang and Jiawei Huang.", + "venue": "Advances in Neural Information Processing Systems, 33:2747\u20132758, 2020.", + "url": null + } + }, + { + "45": { + "title": "Scalable deep reinforcement learning for vision-based robotic manipulation.", + "author": "Dmitry Kalashnikov, Alex Irpan, Peter Pastor, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Mrinal Kalakrishnan, Vincent Vanhoucke, et al.", + "venue": "In Conference on Robot Learning, pp. 651\u2013673. PMLR, 2018.", + "url": null + } + }, + { + "46": { + "title": "Learning to drive in a day.", + "author": "Alex Kendall, Jeffrey Hawke, David Janz, Przemyslaw Mazur, Daniele Reda, John-Mark Allen, Vinh-Dieu Lam, Alex Bewley, and Amar Shah.", + "venue": "In 2019 International Conference on Robotics and Automation (ICRA), pp. 8248\u20138254. IEEE, 2019.", + "url": null + } + }, + { + "47": { + "title": "Bert: Pre-training of deep bidirectional transformers for language understanding.", + "author": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.", + "venue": "In Proceedings of NAACL-HLT, pp. 
4171\u20134186, 2019.", + "url": null + } + }, + { + "48": { + "title": "Actor-critic algorithms.", + "author": "Vijay Konda and John Tsitsiklis.", + "venue": "Advances in neural information processing systems, 12, 1999.", + "url": null + } + }, + { + "49": { + "title": "Conservative q-learning for offline reinforcement learning.", + "author": "Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine.", + "venue": "Advances in Neural Information Processing Systems, 33:1179\u20131191, 2020.", + "url": null + } + }, + { + "50": { + "title": "Coderl: Mastering code generation through pretrained models and deep reinforcement learning.", + "author": "Hung Le, Yue Wang, Akhilesh Deepak Gotmare, Silvio Savarese, and Steven Chu Hong Hoi.", + "venue": "Advances in Neural Information Processing Systems, 35:21314\u201321328, 2022.", + "url": null + } + }, + { + "51": { + "title": "Offline reinforcement learning: Tutorial, review, and perspectives on open problems.", + "author": "Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu.", + "venue": "arXiv preprint arXiv:2005.01643, 2020.", + "url": null + } + }, + { + "52": { + "title": "Starcoder: may the source be with you!", + "author": "Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, et al.", + "venue": "arXiv preprint arXiv:2305.06161, 2023.", + "url": null + } + }, + { + "53": { + "title": "Competition-level code generation with alphacode.", + "author": "Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, R\u00e9mi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al.", + "venue": "Science, 378(6624):1092\u20131097, 2022.", + "url": null + } + }, + { + "54": { + "title": "Rouge: A package for automatic evaluation of summaries.", + "author": "Chin-Yew Lin.", + "venue": "In Text summarization branches out, pp. 
74\u201381, 2004.", + "url": null + } + }, + { + "55": { + "title": "Rltf: Reinforcement learning from unit test feedback.", + "author": "Jiate Liu, Yiqin Zhu, Kaiwen Xiao, Qiang Fu, Xiao Han, Wei Yang, and Deheng Ye.", + "venue": "arXiv preprint arXiv:2307.04349, 2023.", + "url": null + } + }, + { + "56": { + "title": "Off-policy policy gradient with stationary distribution correction.", + "author": "Yao Liu, Adith Swaminathan, Alekh Agarwal, and Emma Brunskill.", + "venue": "In Uncertainty in artificial intelligence, pp. 1180\u20131190. PMLR, 2020.", + "url": null + } + }, + { + "57": { + "title": "Decoupled weight decay regularization.", + "author": "Ilya Loshchilov and Frank Hutter.", + "venue": "In International Conference on Learning Representations, 2018.", + "url": null + } + }, + { + "58": { + "title": "Wizardcoder: Empowering code large language models with evol-instruct.", + "author": "Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang.", + "venue": "arXiv preprint arXiv:2306.08568, 2023.", + "url": null + } + }, + { + "59": { + "title": "Playing atari with deep reinforcement learning.", + "author": "Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller.", + "venue": "arXiv preprint arXiv:1312.5602, 2013.", + "url": null + } + }, + { + "60": { + "title": "Deepstack: Expert-level artificial intelligence in heads-up no-limit poker.", + "author": "Matej Morav\u010d\u00edk, Martin Schmid, Neil Burch, Viliam Lis\u00fd, Dustin Morrill, Nolan Bard, Trevor Davis, Kevin Waugh, Michael Johanson, and Michael Bowling.", + "venue": "Science, 356(6337):508\u2013513, 2017.", + "url": null + } + }, + { + "61": { + "title": "Bridging the gap between value and policy based reinforcement learning.", + "author": "Ofir Nachum, Mohammad Norouzi, Kelvin Xu, and Dale Schuurmans.", + "venue": "Advances in neural information processing systems, 
30, 2017.", + "url": null + } + }, + { + "62": { + "title": "Visual reinforcement learning with imagined goals.", + "author": "Ashvin V Nair, Vitchyr Pong, Murtaza Dalal, Shikhar Bahl, Steven Lin, and Sergey Levine.", + "venue": "Advances in neural information processing systems, 31, 2018.", + "url": null + } + }, + { + "63": { + "title": "Algorithms for inverse reinforcement learning.", + "author": "Andrew Y Ng, Stuart Russell, et al.", + "venue": "In ICML, volume 1, pp. 2, 2000.", + "url": null + } + }, + { + "64": { + "title": "Codegen: An open large language model for code with multi-turn program synthesis.", + "author": "Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong.", + "venue": "In The Eleventh International Conference on Learning Representations, 2022.", + "url": null + } + }, + { + "65": { + "title": "Gpt-4 technical report.", + "author": "OpenAI.", + "venue": "arXiv preprint arXiv:2303.08774, 2023.", + "url": null + } + }, + { + "66": { + "title": "Type-and-example-directed program synthesis.", + "author": "Peter-Michael Osera and Steve Zdancewic.", + "venue": "ACM SIGPLAN Notices, 50(6):619\u2013630, 2015.", + "url": null + } + }, + { + "67": { + "title": "Training language models to follow instructions with human feedback.", + "author": "Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al.", + "venue": "Advances in Neural Information Processing Systems, 35:27730\u201327744, 2022.", + "url": null + } + }, + { + "68": { + "title": "Bleu: a method for automatic evaluation of machine translation.", + "author": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu.", + "venue": "In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pp. 
311\u2013318, 2002.", + "url": null + } + }, + { + "69": { + "title": "Abstract syntax networks for code generation and semantic parsing.", + "author": "Maxim Rabinovich, Mitchell Stern, and Dan Klein.", + "venue": "arXiv preprint arXiv:1704.07535, 2017.", + "url": null + } + }, + { + "70": { + "title": "Language models are unsupervised multitask learners.", + "author": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al.", + "venue": "OpenAI blog, 1(8):9, 2019.", + "url": null + } + }, + { + "71": { + "title": "Scaling language models: Methods, analysis & insights from training gopher.", + "author": "Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al.", + "venue": "arXiv preprint arXiv:2112.11446, 2021.", + "url": null + } + }, + { + "72": { + "title": "Direct preference optimization: Your language model is secretly a reward model.", + "author": "Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn.", + "venue": "arXiv preprint arXiv:2305.18290, 2023.", + "url": null + } + }, + { + "73": { + "title": "Exploring the limits of transfer learning with a unified text-to-text transformer.", + "author": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu.", + "venue": "The Journal of Machine Learning Research, 21(1):5485\u20135551, 2020.", + "url": null + } + }, + { + "74": { + "title": "Sequence level training with recurrent neural networks.", + "author": "Marc\u2019Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba.", + "venue": "arXiv preprint arXiv:1511.06732, 2015.", + "url": null + } + }, + { + "75": { + "title": "Self-critical sequence training for image captioning.", + "author": "Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel.", + "venue": "In 
Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7008\u20137024, 2017.", + "url": null + } + }, + { + "76": { + "title": "Code llama: Open foundation models for code.", + "author": "Baptiste Rozi\u00e8re, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, J\u00e9r\u00e9my Rapin, et al.", + "venue": "arXiv preprint arXiv:2308.12950, 2023.", + "url": null + } + }, + { + "77": { + "title": "Proximal policy optimization algorithms.", + "author": "John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov.", + "venue": "arXiv preprint arXiv:1707.06347, 2017.", + "url": null + } + }, + { + "78": { + "title": "Execution-based code generation using deep reinforcement learning.", + "author": "Parshin Shojaee, Aneesh Jain, Sindhu Tipirneni, and Chandan K Reddy.", + "venue": "arXiv preprint arXiv:2301.13816, 2023.", + "url": null + } + }, + { + "79": { + "title": "Lecture 7: Policy gradient.", + "author": "David Silver.", + "venue": "UCL Course on RL, 2015.", + "url": null + } + }, + { + "80": { + "title": "Mastering the game of go with deep neural networks and tree search.", + "author": "David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al.", + "venue": "nature, 529(7587):484\u2013489, 2016.", + "url": null + } + }, + { + "81": { + "title": "Program synthesis through reinforcement learning guided tree search.", + "author": "Riley Simmons-Edler, Anders Miltner, and Sebastian Seung.", + "venue": "arXiv preprint arXiv:1806.02932, 2018.", + "url": null + } + }, + { + "82": { + "title": "Learning to summarize with human feedback.", + "author": "Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano.", + "venue": "Advances in Neural Information Processing Systems, 
33:3008\u20133021, 2020.", + "url": null + } + }, + { + "83": { + "title": "A methodology for lisp program construction from examples.", + "author": "Phillip D Summers.", + "venue": "Journal of the ACM (JACM), 24(1):161\u2013175, 1977.", + "url": null + } + }, + { + "84": { + "title": "Action branching architectures for deep reinforcement learning.", + "author": "Arash Tavakoli, Fabio Pardo, and Petar Kormushev.", + "venue": "In Proceedings of the aaai conference on artificial intelligence, volume 32, 2018.", + "url": null + } + }, + { + "85": { + "title": "Llama: Open and efficient foundation language models.", + "author": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric Hambro, Faisal Azhar, et al.", + "venue": "arXiv preprint arXiv:2302.13971, 2023.", + "url": null + } + }, + { + "86": { + "title": "Deep reinforcement learning with double q-learning.", + "author": "Hado Van Hasselt, Arthur Guez, and David Silver.", + "venue": "In Proceedings of the AAAI conference on artificial intelligence, volume 30, 2016.", + "url": null + } + }, + { + "87": { + "title": "GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model.", + "author": "Ben Wang and Aran Komatsuzaki.", + "venue": "https://github.com/kingoflolz/mesh-transformer-jax, May 2021.", + "url": null + } + }, + { + "88": { + "title": "Compilable neural code generation with compiler feedback.", + "author": "Xin Wang, Yasheng Wang, Yao Wan, Fei Mi, Yitong Li, Pingyi Zhou, Jin Liu, Hao Wu, Xin Jiang, and Qun Liu.", + "venue": "In Findings of the Association for Computational Linguistics: ACL 2022, pp. 
9\u201319, 2022.", + "url": null + } + }, + { + "89": { + "title": "Codet5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation.", + "author": "Yue Wang, Weishi Wang, Shafiq Joty, and Steven CH Hoi.", + "venue": "In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 8696\u20138708, 2021.", + "url": null + } + }, + { + "90": { + "title": "Codet5+: Open code large language models for code understanding and generation.", + "author": "Yue Wang, Hung Le, Akhilesh Deepak Gotmare, Nghi DQ Bui, Junnan Li, and Steven CH Hoi.", + "venue": "arXiv preprint arXiv:2305.07922, 2023.", + "url": null + } + }, + { + "91": { + "title": "Dueling network architectures for deep reinforcement learning.", + "author": "Ziyu Wang, Tom Schaul, Matteo Hessel, Hado Hasselt, Marc Lanctot, and Nando Freitas.", + "venue": "In International conference on machine learning, pp. 1995\u20132003. PMLR, 2016.", + "url": null + } + }, + { + "92": { + "title": "Q-learning.", + "author": "Christopher JCH Watkins and Peter Dayan.", + "venue": "Machine learning, 8:279\u2013292, 1992.", + "url": null + } + }, + { + "93": { + "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning.", + "author": "Ronald J Williams.", + "venue": "Machine learning, 8:229\u2013256, 1992.", + "url": null + } + }, + { + "94": { + "title": "Bellman-consistent pessimism for offline reinforcement learning.", + "author": "Tengyang Xie, Ching-An Cheng, Nan Jiang, Paul Mineiro, and Alekh Agarwal.", + "venue": "Advances in neural information processing systems, 34:6683\u20136694, 2021a.", + "url": null + } + }, + { + "95": { + "title": "Policy finetuning: Bridging sample-efficient offline and online reinforcement learning.", + "author": "Tengyang Xie, Nan Jiang, Huan Wang, Caiming Xiong, and Yu Bai.", + "venue": "Advances in neural information processing systems, 34:27395\u201327407, 2021b.", + "url": null + } 
+ }, + { + "96": { + "title": "Wizardlm: Empowering large language models to follow complex instructions.", + "author": "Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang.", + "venue": "arXiv preprint arXiv:2304.12244, 2023.", + "url": null + } + }, + { + "97": { + "title": "Graph-based, self-supervised program repair from diagnostic feedback.", + "author": "Michihiro Yasunaga and Percy Liang.", + "venue": "In International Conference on Machine Learning, pp. 10799\u201310808. PMLR, 2020.", + "url": null + } + }, + { + "98": { + "title": "Actor-critic alignment for offline-to-online reinforcement learning.", + "author": "Zishun Yu and Xinhua Zhang.", + "venue": "In Proceedings of the 40th International Conference on Machine Learning, volume 202, pp. 40452\u201340474, 2023.", + "url": null + } + }, + { + "99": { + "title": "Offline reinforcement learning with realizability and single-policy concentrability.", + "author": "Wenhao Zhan, Baihe Huang, Audrey Huang, Nan Jiang, and Jason Lee.", + "venue": "In Conference on Learning Theory, pp. 2730\u20132775. PMLR, 2022.", + "url": null + } + }, + { + "100": { + "title": "Codegeex: A pre-trained model for code generation with multilingual evaluations on humaneval-x.", + "author": "Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Zihan Wang, Lei Shen, Andi Wang, Yang Li, et al.", + "venue": "arXiv preprint arXiv:2303.17568, 2023.", + "url": null + } + }, + { + "101": { + "title": "Seq2sql: Generating structured queries from natural language using reinforcement learning.", + "author": "Victor Zhong, Caiming Xiong, and Richard Socher.", + "venue": "arXiv preprint arXiv:1709.00103, 2017.", + "url": null + } + }, + { + "102": { + "title": "Maximum entropy inverse reinforcement learning.", + "author": "Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, Anind K Dey, et al.", + "venue": "In Aaai, volume 8, pp. 1433\u20131438. 
Chicago, IL, USA, 2008.", + "url": null + } + }, + { + "103": { + "title": "Fine-tuning language models from human preferences.", + "author": "Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving.", + "venue": "arXiv preprint arXiv:1909.08593, 2019.", + "url": null + } + }, + { + "104": { + "title": "Regret minimization in games with incomplete information.", + "author": "Martin Zinkevich, Michael Johanson, Michael Bowling, and Carmelo Piccione.", + "venue": "Advances in neural information processing systems, 20, 2007.", + "url": null + } + }, + { + "105": { + "title": "Automatic program synthesis of long programs with a learned garbage collector.", + "author": "Amit Zohar and Lior Wolf.", + "venue": "Advances in neural information processing systems, 31, 2018.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2310.03173v2" +} \ No newline at end of file