In contrast to RNN sequence-to-sequence models [37], the Transformer outperforms the BerkeleyParser [29] even when training only on the WSJ training set of 40K sentences.
7 Conclusion |
In this work, we presented the Transformer, the first sequence transduction model based entirely on |
attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with |
multi-headed self-attention. |
For translation tasks, the Transformer can be trained significantly faster than architectures based |
on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014 |
English-to-French translation tasks, we achieve a new state of the art. In the former task our best |
model outperforms even all previously reported ensembles. |
We are excited about the future of attention-based models and plan to apply them to other tasks. We |
plan to extend the Transformer to problems involving input and output modalities other than text and |
to investigate local, restricted attention mechanisms to efficiently handle large inputs and outputs |
such as images, audio and video. Making generation less sequential is another research goal of ours.
The code we used to train and evaluate our models is available at https://github.com/tensorflow/tensor2tensor.
Acknowledgements We are grateful to Nal Kalchbrenner and Stephan Gouws for their fruitful |
comments, corrections and inspiration. |
Evaluating Large Language Models Trained on Code |
Mark Chen * 1 Jerry Tworek * 1 Heewoo Jun * 1 Qiming Yuan * 1 Henrique Ponde de Oliveira Pinto * 1 |
Jared Kaplan * 2 Harri Edwards 1 Yuri Burda 1 Nicholas Joseph 2 Greg Brockman 1 Alex Ray 1 Raul Puri 1 |
Gretchen Krueger 1 Michael Petrov 1 Heidy Khlaaf 3 Girish Sastry 1 Pamela Mishkin 1 Brooke Chan 1 |
Scott Gray 1 Nick Ryder 1 Mikhail Pavlov 1 Alethea Power 1 Lukasz Kaiser 1 Mohammad Bavarian 1 |
Clemens Winter 1 Philippe Tillet 1 Felipe Petroski Such 1 Dave Cummings 1 Matthias Plappert 1 |
Fotios Chantzis 1 Elizabeth Barnes 1 Ariel Herbert-Voss 1 William Hebgen Guss 1 Alex Nichol 1 Alex Paino 1 |
Nikolas Tezak 1 Jie Tang 1 |
Igor Babuschkin 1 Suchir Balaji 1 Shantanu Jain 1 William Saunders 1 |
Christopher Hesse 1 Andrew N. Carr 1 Jan Leike 1 Josh Achiam 1 Vedant Misra 1 Evan Morikawa 1 |
Alec Radford 1 Matthew Knight 1 Miles Brundage 1 Mira Murati 1 Katie Mayer 1 Peter Welinder 1 |
Bob McGrew 1 Dario Amodei 2 Sam McCandlish 2 |
Ilya Sutskever 1 Wojciech Zaremba 1 |
Abstract |
We introduce Codex, a GPT language model finetuned on publicly available code from GitHub, |
and study its Python code-writing capabilities. |
A distinct production version of Codex powers |
GitHub Copilot. On HumanEval, a new evaluation set we release to measure functional correctness for synthesizing programs from docstrings, |
our model solves 28.8% of the problems, while |
GPT-3 solves 0% and GPT-J solves 11.4%. Furthermore, we find that repeated sampling from the |
model is a surprisingly effective strategy for producing working solutions to difficult prompts. Using this method, we solve 70.2% of our problems |
with 100 samples per problem. Careful investigation of our model reveals its limitations, including |
difficulty with docstrings describing long chains |
of operations and with binding operations to variables. Finally, we discuss the potential broader |
impacts of deploying powerful code generation |
technologies, covering safety, security, and economics. |
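The repeated-sampling result above (70.2% solved with 100 samples per problem) can be turned into an unbiased per-problem estimate. A common, numerically exact way to estimate pass@k from n samples of which c pass the unit tests is the ratio-of-binomials form; the function name and signature here are illustrative, not taken from the paper's released code:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k: the probability that at least one of
    k samples drawn (without replacement) from n generated samples passes,
    given that c of the n samples pass the unit tests.

    Equals 1 - C(n - c, k) / C(n, k).
    """
    if n - c < k:
        # Fewer than k failing samples exist, so any k-subset
        # must contain at least one passing sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 2 samples, 1 correct -> pass@1 is exactly c/n = 0.5
print(pass_at_k(2, 1, 1))
```

With k = n the estimate reduces to "did any sample pass", matching how a 100-sample budget yields the headline solve rate.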
*Equal contribution |
1OpenAI, San Francisco, California, USA. |
2Anthropic, San Francisco, California, USA. Work performed while at OpenAI.
3Zipline, South San Francisco, California, USA. Work performed while at OpenAI. |
Correspondence to: Mark Chen <mark@openai.com>, |
Jerry Tworek <jt@openai.com>, Heewoo Jun <heewoo@openai.com>, Qiming Yuan <qiming@openai.com>. |
1. Introduction |
Scalable sequence prediction models (Graves, 2014; |
Vaswani et al., 2017; Child et al., 2019) have become a |
general-purpose method for generation and representation |
learning in many domains, including natural language processing (Mikolov et al., 2013; Sutskever et al., 2014; Dai & |
Le, 2015; Peters et al., 2018; Radford et al., 2018; Devlin |
et al., 2018), computer vision (Van Oord et al., 2016; Menick |
& Kalchbrenner, 2018; Chen et al., 2020; Bao et al., 2021), |
audio and speech processing (Oord et al., 2016; 2018; Dhariwal et al., 2020; Baevski et al., 2020), biology (Alley et al., |
2019; Rives et al., 2021), and even across multiple modalities (Das et al., 2017; Lu et al., 2019; Ramesh et al., 2021; |
Zellers et al., 2021). More recently, language models have |
also fueled progress towards the longstanding challenge |
of program synthesis (Simon, 1963; Manna & Waldinger, |
1971), spurred by the presence of code in large datasets |
(Husain et al., 2019; Gao et al., 2020) and the resulting programming capabilities of language models trained on these |
datasets (Wang & Komatsuzaki, 2021). Popular language |
modeling objectives like masked language modeling (Devlin |
et al., 2018) and span prediction (Raffel et al., 2020) have |
also been adapted to train their programming counterparts |
CodeBERT (Feng et al., 2020) and PyMT5 (Clement et al., |
2020). |
Similarly, our early investigation of GPT-3 (Brown et al., |
2020) revealed that it could generate simple programs from |
Python docstrings. While rudimentary, this capability was |
exciting because GPT-3 was not explicitly trained for code |
generation. Given the considerable success of large language models in other modalities and the abundance of |
publicly available code, we hypothesized that a specialized |
GPT model, called Codex, could excel at a variety of coding |
tasks. This paper describes several early Codex models, |
whose descendants power GitHub Copilot and the Codex |
models in the OpenAI API. |
arXiv:2107.03374v2 [cs.LG] 14 Jul 2021 |
Figure 1. Pass rates of our models on the HumanEval dataset as a |
function of model size. When a single sample is generated for each |
problem, GPT-12B solves no problems, but Codex (fine-tuned |
on code) solves 28.8% of the problems, and Codex-S (further |
fine-tuned on correctly implemented standalone functions) solves |
37.7% of the problems. From here, further gains can be realized by |
generating 100 samples per problem and selecting the sample with |
the highest mean log-probability (44.5% solved) or by selecting |
the sample that passes the unit tests (77.5% solved). All samples |
are generated with temperature 0.8. |
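The caption's reranking step (pick the sample with the highest mean log-probability) is simple to sketch. This assumes per-token log-probabilities are returned alongside each sample, as sampling APIs typically allow; the function and data layout here are illustrative:

```python
def best_by_mean_logprob(candidates):
    """Select the code sample whose tokens have the highest mean
    log-probability under the model.

    `candidates` is a list of (code_str, token_logprobs) pairs, where
    token_logprobs holds the per-token log-probabilities recorded
    while sampling. Mean (not sum) avoids biasing toward short samples.
    """
    best_code, _ = max(candidates, key=lambda c: sum(c[1]) / len(c[1]))
    return best_code

# Example: the second sample has higher mean log-probability.
samples = [
    ("def f(x): return x", [-1.2, -0.9, -1.1]),
    ("def f(x): return x + 1", [-0.3, -0.4, -0.2, -0.5]),
]
print(best_by_mean_logprob(samples))
```

This ranking needs no unit tests at selection time, which is why it is the weaker of the two selection strategies in the caption (44.5% vs. 77.5%).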
In this work, we focus on the task of generating standalone Python functions from docstrings, and evaluate the |
correctness of code samples automatically through unit |
tests. This is in contrast to natural language generation, |
where samples are typically evaluated by heuristics or by |
human evaluators. To accurately benchmark our model, |
we create a dataset of 164 original programming problems |
with unit tests. These problems assess language comprehension, algorithms, and simple mathematics, with some |
comparable to simple software interview questions.
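The pass/fail core of unit-test-based evaluation can be sketched as follows. This is not the paper's released harness: a real harness must sandbox execution (separate process, time and resource limits), since model output is untrusted code; this sketch shows only the correctness logic:

```python
def check_correctness(candidate_src: str, test_src: str) -> bool:
    """Execute a generated function definition, then run its
    assert-based unit tests in the same namespace.

    Returns True only if the definition and every assertion succeed.
    WARNING: runs untrusted code; sandbox in any real deployment.
    """
    env = {}
    try:
        exec(candidate_src, env)  # define the candidate function
        exec(test_src, env)       # run the unit tests against it
    except Exception:
        return False
    return True

# Example: a correct and an incorrect candidate for the same tests.
tests = "assert add(1, 2) == 3\nassert add(-1, 1) == 0"
print(check_correctness("def add(a, b):\n    return a + b", tests))
print(check_correctness("def add(a, b):\n    return a - b", tests))
```

Counting, per problem, how many of n samples return True is exactly the c fed into a pass@k estimate.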