FLAME: A small language model for spreadsheet formulas

Harshit Joshi¹, Abishai Ebenezer¹, José Cambronero²*, Sumit Gulwani²*, Aditya Kanade³*, Vu Le²*, Ivan Radiček⁴*, Gust Verbruggen⁵*
¹Microsoft, India  ²Microsoft, USA  ³Microsoft Research, India  ⁴Microsoft, Croatia  ⁵Microsoft, Belgium
{t-hjoshi, t-aebenezer, jcambronero, sumitg, kanadeaditya, levu, ivradice, gverbruggen}@microsoft.com

Abstract

The widespread use of spreadsheet environments by billions of users presents a unique opportunity for formula-authoring assistance. Although large language models, such as Codex, can assist in general-purpose languages, they are expensive to train and challenging to deploy due to their large model sizes (up to billions of parameters). Moreover, they require hundreds of gigabytes of training data. We present FLAME, a T5-based model trained on Excel formulas that leverages domain insights to achieve competitive performance with a substantially smaller model (60M parameters) and two orders of magnitude less training data.
We curate a training dataset using sketch deduplication, introduce an Excel-specific formula tokenizer for our model, and use domain-specific versions of masked span prediction and noisy auto-encoding as pretraining objectives. We evaluate FLAME on formula repair, formula auto-completion, and a novel task called syntax reconstruction. FLAME (60M) can outperform much larger models, such as Codex-Davinci (175B), Codex-Cushman (12B), and CodeT5 (220M), in 6 out of 10 settings.

1 Introduction

Despite a much larger user base, spreadsheet environments do not have access to nearly the same range of productivity tools as general programming environments. The latter typically have code completion, refactoring, linting, and a wide range of extensions for additional functionality, like generating tests, inserting code snippets, and summarizing code. Many of these advanced programming assistance tools are driven by advances in large language models trained on code (LLMCs). Codex [Chen et al., 2021a] is used for code completion [GitHub, 2021] and repair [Joshi et al., 2022], AlphaCode [Li et al., 2022a] solves competitive programming problems, [Li et al., 2022b] built a code review system, and many other models show strong performance on code-related tasks [Xu et al., 2022; Fried et al., 2022; Nijkamp et al., 2022].

*Listed in alphabetical order

[Figure 1: A summary of model comparisons in the fine-tuned setting for different formula assistance tasks (Formula Autocompletion, Last Mile Repair, Syntax Reconstruction), plotting performance against model parameters (log scale) for FLAME (16M), FLAME (60M), CodeT5 (220M), Codex-Cushman (12B), and Codex-Davinci (175B). We show the results under a top-5 cutoff on a public Excel benchmark. Note that all Codex-Davinci results are few-shot, and Autocompletion is zero-shot for all systems except CodeT5. For Autocompletion, results represent the fraction of benchmarks successfully completed (based on a sketch match metric) given 90% of the prefix.]

To capture the complexity and variety of code and comments in different languages, these models need billions of parameters: the smallest variant of Codex, used by GitHub Copilot, has 12 billion parameters.
As a result, these models are trained for long periods on corpora containing millions of programs. For example, InCoder 6.7B used 159GB of code over a period of 24 days on 248 V100 GPUs. In addition to training costs, inference on large models is expensive due to extensive hardware requirements. For example, using Codex-Davinci to process 1000 tokens, including the prompt, costs $0.02 USD [OpenAI, 2023]. In a spreadsheet environment used by billions, these costs quickly add up.

In this paper, we present FLAME, a Formula LAnguage Model for Excel trained exclusively on Excel formulas. FLAME is based on T5-small [Raffel et al., 2020] and has only 60 million parameters, yet it can compete with much larger models (up to 175B parameters) on three formula authoring tasks: last-mile repair, formula auto-completion, and syntax reconstruction. Syntax reconstruction is a novel task where all delimiters are removed from a formula, resulting in a flat stream of tokens, and the model must recover the original formula.

[arXiv:2301.13779v1 [cs.PL] 31 Jan 2023]

[Figure 2: We consider three downstream tasks, each shown with a case that FLAME successfully solves. Red and green denote the input and the expected output, respectively; yellow denotes the buggy part of the formula in the repair task, where the user has swapped the correct order of arguments, resulting in a type error. Last Mile Repair: =SUMIF(B1:B5, A1:A5, "Yes") becomes =SUMIF(B1:B5, "Yes", A1:A5). Formula Autocompletion: =AVERAGEIFS(B4:M4 becomes =AVERAGEIFS(B4:M4, B4:M4, ">0"). Syntax Reconstruction: IFERROR VLOOKUP A2 Sheet2 $A$1:$E$22 5 0 "Not available" becomes =IFERROR(VLOOKUP(A2, Sheet2!$A$1:$E$22, 5, 0), "Not available").]

Figure 1 shows a high-level summary of results as a function of model size on a public dataset, where FLAME can outperform larger models in all three tasks. Figure 2 provides real examples, solved by FLAME, for each of these tasks.

There are three main challenges involved in training a model for Excel formulas: obtaining diverse training data, tokenizing their unique structure, and pretraining with objectives that teach the model about this distinctive structure. Spreadsheets contain many duplicate formulas due to copying down formula cells. We reduced our corpus from 927M formulas down to 6.1M by comparing formulas based on syntax, creating 540MB of training data.
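The syntax-based deduplication above relies on formula sketches (detailed in §3.2), where constants and cell references are abstracted to their token type. A minimal sketch of that idea is shown below; the regular expressions and the `str`/`cell`/`num` category names are our own illustrative assumptions, not the authors' implementation, and they ignore many Excel reference forms (sheet-qualified ranges, structured references, etc.):

```python
import re

# Order matters: strings first (so cell-like text inside quotes is not
# rewritten), then cell references, then numeric constants.
SKETCH_RULES = [
    (re.compile(r'"[^"]*"'), "str"),                      # string constants
    (re.compile(r"\$?[A-Z]{1,3}\$?\d+", re.I), "cell"),   # cell refs like A1, $B$2
    (re.compile(r"\d+(\.\d+)?"), "num"),                  # numeric constants
]

def formula_sketch(formula: str) -> str:
    """Replace constants and cell references with their token type."""
    for pattern, token_type in SKETCH_RULES:
        formula = pattern.sub(token_type, formula)
    return formula

def dedupe_by_sketch(formulas):
    """Keep one representative formula per unique sketch."""
    seen = {}
    for f in formulas:
        seen.setdefault(formula_sketch(f), f)
    return list(seen.values())
```

Under these assumptions, =SUM(A1:A10) and =SUM(B2:B20) share the sketch =SUM(cell:cell), so only one representative survives deduplication.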
We combine formula insights with byte pair encoding (BPE) to train an Excel-specific tokenizer. In addition to two generic objectives (tail masking and denoising auto-encoding), we introduce two new pretraining objectives designed for formulas: language-aware masked span prediction and user-inspired denoising. We extensively evaluate FLAME on three downstream tasks, showing that our proposed solutions to the modeling challenges significantly improve the performance of FLAME over T5-based models and can compete with much larger models. Specifically, we find that FLAME can outperform other models in 6 out of 10 settings in our evaluation.

We make the following contributions:
- We present FLAME, the first language model designed exclusively for Excel formulas (§3). To this end, we introduce domain-specific dataset curation (§3.2), tokenization (§3.3), and pretraining objectives (§3.4).
- We extensively evaluate FLAME on three formula assistance tasks: last-mile repair, formula autocompletion, and syntax reconstruction (§4.3). We compare our performance to two variants of Codex (the latest versions of Cushman and Davinci) and CodeT5, and finetune Cushman for the downstream tasks (§4.1).
- We show that FLAME can outperform larger models in 6 out of 10 settings (§5.1).
- We analyze the contribution of different design choices for FLAME (§5.2, §5.3).

2 Related Work

Language models for code. Multiple popular language model architectures have been successfully adapted to code.
CodeBERT [Feng et al., 2020] trained BERT (encoder) on natural language and code. CodeT5 [Wang et al., 2021] trained T5 (encoder-decoder) on a similar corpus. Codex [Chen et al., 2021a], PolyCoder [Xu et al., 2022], and CodeGen [Nijkamp et al., 2022] are all trained variants of GPT (decoder). These models are trained on multiple programming languages and use pretraining objectives to understand or generate code and natural language, but do not adapt them for specific languages. In contrast, FLAME exploits a single domain to use domain-specific objectives, such as span masking that respects programming language tokens, to learn a better representation.

Evaluating code models. Many tasks have been presented to evaluate code models, and CodeXGLUE [Lu et al., 2021] bundles most of these. These tasks are categorized by the modality (text/code) of their input and output. FLAME is trained on formulas exclusively and is focused on formula tasks. We now describe related work for these tasks.

Formula repair. A popular code authoring task is repairing small mistakes. DeepFix [Gupta et al., 2017], BIFI [Yasunaga and Liang, 2021], DrRepair [Yasunaga and Liang, 2020], and TFix [Berabi et al., 2021] use deep learning to perform syntax, compilation, or diagnostics repair in general-purpose programming languages. LaMirage [Bavishi et al., 2022] generates repair engines for low-code languages and coins the term last-mile repair for these types of fixes. RING [Joshi et al., 2022] uses Codex to fix last-mile errors across multiple languages, but it requires additional information, such as examples of repairs and compiler messages.

Formula autocompletion. The generative nature of LLMCs makes them serve as code-completion engines. This feature has shipped in commercial products, such as GitHub Copilot in Visual Studio Code [GitHub, 2021] and IntelliCode in Visual Studio [Svyatkovskiy et al., 2020]. SpreadsheetCoder [Chen et al., 2021b] is a model designed for predicting simple formulas from context in the spreadsheet.

Syntax reconstruction. Syntax reconstruction, where all delimiters in a formula are removed, resembles component-based program synthesis, where partial programs are combined into a program that satisfies a specification. Components are provided by a user [Jha et al., 2010], generated by a model [Rahmani et al., 2021], or defined by an API [Feng et al., 2017].

3 FLAME: Approach

We now describe the FLAME architecture and how it overcomes the three key challenges (data, tokenization, and training) in pretraining a general language model for formulas.
3.1 Architecture

To facilitate both formula understanding and generation, FLAME follows an encoder-decoder architecture based on T5 [Raffel et al., 2020]. Encoder models like CodeBERT [Feng et al., 2020] show remarkable code understanding capabilities. Decoder models like CodeGen [Nijkamp et al., 2022] and Codex [Chen et al., 2021a] perform well on code generation. Encoder-decoder models seek to blend these strengths.

[Figure 3: The four pretraining objectives used by FLAME, illustrated on the formula INDEX(summary!N:N, MATCH(A350, summary!$D:$D, 0)). User-inspired denoising draws from 17 user-inspired noise operators (e.g., changing function arity, or replacing a comma with a semicolon). Tail masking truncates the formula (e.g., to INDEX(summary!N:N, MAT). The masking objectives use different combinations of high and low mask rates and average span lengths. For each batch, we randomly (with weighted probability) choose one of the four objectives. Generic objectives (tail masking and random noising) are shown with a yellow header, while formula-specific variants (language-aware span masking and user-inspired denoising) are shown with a green header. Inserted tokens are depicted in red and deleted tokens in blue.]
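To make the noising objectives above concrete, the sketch below generates one tail-masking pair (truncate a formula at a random point; the model must produce the missing tail) and applies one user-inspired noise operator (comma-to-semicolon, mimicking locale-related separator mistakes). The function names and the specific operator implementation are our own illustrative assumptions, not the paper's code:

```python
import random

def tail_masking_pair(formula: str, rng: random.Random):
    """Split a formula into (prefix, tail): the model sees the prefix
    and is trained to generate the missing tail."""
    cut = rng.randrange(1, len(formula))  # keep a non-empty prefix and tail
    return formula[:cut], formula[cut:]

def comma_to_semicolon(formula: str, rng: random.Random):
    """One user-inspired noise operator: corrupt a single argument
    separator. The model learns to map the noised formula back to
    the original."""
    commas = [i for i, ch in enumerate(formula) if ch == ","]
    if not commas:
        return formula
    i = rng.choice(commas)
    return formula[:i] + ";" + formula[i + 1:]

rng = random.Random(0)
prefix, tail = tail_masking_pair("=SUM(A1:A10)", rng)
noised = comma_to_semicolon("=MATCH(A350, summary!$D:$D, 0)", rng)
```

In this framing, (noised, original) pairs serve as denoising training examples and (prefix, tail) pairs serve as completion examples, matching the split between denoising and tail-masking objectives in Figure 3.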
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content='2 Training Data We start from a dataset of 927M formulas drawn from a corpus of 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content='8M publicly available Excel workbooks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content='1 Each workbook contains one or more worksheets, and each worksheet contains zero or more formulas.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Formulas in spreadsheets are often repeated with minor cell reference changes across rows or columns.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' For example, a user can drag a formula to another cell to repeat a computation on neighboring cell values.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' We compute formula sketches to preserve a single instance of each unique formula per workbook.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' In a formula sketch, numeric constants, string constants and cell references are replaced by their token type.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' For example, the sketch of =SUM(A1:A10) is =SUM(cell:cell).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' After applying sketch deduplication, we are left with 6.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content='1M formulas.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Note that ap- plying this globally to the corpus, rather than per workbook, results in only 591K formulas.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' We found this globally dedu- plicated corpus to be insufficient for training as it skews the distribution of formulas —see evaluation (§5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content='2) for details.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content='3 Tokenizing Formulas Tokenization is an essential part of language models [Domingo et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=', 2018].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' A popular method for tokenization is byte pair encoding (BPE) [Sennrich et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=', 2016].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' BPE iteratively joins consecutive tokens that appear together most frequently until a target vocabulary size is reached.' 
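To make the merge rule concrete, here is a toy sketch of a single BPE iteration (an illustration under simplified assumptions, not the actual vocabulary-construction code): count adjacent token pairs across a corpus and merge the most frequent pair everywhere.

```python
from collections import Counter

def most_frequent_pair(corpus):
    # Count every adjacent token pair across all token sequences.
    pairs = Counter()
    for tokens in corpus:
        for a, b in zip(tokens, tokens[1:]):
            pairs[(a, b)] += 1
    return pairs.most_common(1)[0][0]

def merge_pair(tokens, pair):
    # Replace each occurrence of the pair with its concatenation.
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            out.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

# Toy corpus of character-level formula fragments.
corpus = [list('sum('), list('sum('), list('sumif(')]
pair = most_frequent_pair(corpus)
corpus = [merge_pair(tokens, pair) for tokens in corpus]
```

Repeating this until the target vocabulary size is reached is what produces merged tokens such as SUM( becoming a single vocabulary entry.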
However, this procedure can have adverse effects on formulas. For example, SUM and ( are combined to get SUM(, which can reduce expressiveness and hurt performance for tasks like repair. Our tokenizer treats punctuation, whitespace, built-in function names, and digits as individual tokens [Chowdhery et al., 2022] and applies BPE [Radford et al., 2019] to the remaining parts of formulas, such as string constants. Excel is case insensitive (with the exception of string contents), so we convert all input tokens to lowercase to map differently capitalized tokens to a single token. For example, without lowercasing, the same function written as SUM and sum would map to different tokens.

Example 1.
A formula =SUMIF(B1:B5, "Not available", A1:A5) is tokenized as = sumif ( b 1 : b 5 , ␣ " not ␣ available " , ␣ a 1 : a 5 ), with space tokens denoted by ␣.

¹These workbooks were collected as part of a large Excel corpus planned for public release by a separate group of authors.

3.4 Pretraining Objectives

In this section, we describe the combination of generic and Excel-specific pretraining objectives, summarized in Figure 3, that we use to train FLAME.

Masking objectives. We use two forms of masking to pretrain FLAME: an Excel-specific variant of masked span prediction (MSP) and a generic tail masking objective.

Language-aware masked span prediction. In contrast to traditional MSP, spans must respect Excel lexer bounds. For example, when an Excel cell reference BC18 is divided into four tokens B C 1 8, we ensure that either all or none of its constituent tokens are masked.
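A minimal sketch of this all-or-none constraint (our illustration, with a hypothetical lexeme grouping and a flat per-lexeme mask rate; the real objective additionally controls masking rates and span lengths):

```python
import random

def mask_lexemes(lexeme_groups, rng, rate=0.35, mask='<mask>'):
    # Each group holds the subword tokens of one Excel lexeme, e.g.
    # ['b', 'c', '1', '8'] for the cell reference BC18.
    out = []
    for group in lexeme_groups:
        if rng.random() < rate:
            out.append(mask)      # mask all constituent tokens as one span
        else:
            out.extend(group)     # or keep the lexeme fully intact
    return out

groups = [['='], ['sum'], ['('], ['b', 'c', '1', '8'], [')']]
masked = mask_lexemes(groups, random.Random(0))
```

Because masking decisions are made per lexeme, a multi-token cell reference is never split between masked and unmasked tokens.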
Consecutive masked tokens are represented with a single mask token. Inspired by Mixture-of-Denoisers [Tay et al., 2022], we mask spans of tokens using combinations of high (35%) and low (15%) masking rates, and big (6 tokens) and small (2 tokens) average span lengths.

Generic tail masking. We perform tail masking at the character level and allow partial masks of complete tokens. We keep the leading {30%, 40%, ..., 70%} tokens of the input sequence and append a mask token.

Noisy auto-encoding. Previous work in natural language processing has used denoising auto-encoding during pretraining [Lewis et al., 2020]. We incorporate two such objectives in FLAME.

Random noise. We introduce generic noise by randomly inserting, deleting, or updating tokens in the input sequence.
The insertion and update operators randomly sample a token from the vocabulary.

Excel-specific user-inspired noise. We introduce noise operators that mirror mistakes that real users might make when writing Excel formulas. For example, users often write formulas with the incorrect function arity for built-in functions such as SUMIF. We implement 17 noise operators (Appendix A) based on a combination of help forum and code analysis. We randomly choose one of these noise operators when introducing noise into an input sequence. Note that for all pretraining objectives, FLAME needs to generate the complete formula (rather than just the masked values).

Combining pretraining objectives. Rather than applying all pretraining objectives on every batch and then combining losses, we pick a single objective for each batch.
We use the following probabilities for choosing the objective to apply: {MSP: 50%, tail masking: 20%, user-inspired denoising: 20%, random denoising: 5%}; with the remaining 5% probability, we leave the sequence intact.

4 Experimental Setup

We now describe our experimental setup. We start with the baseline models we compare against (§4.1) and the training setup (§4.2), and then detail each downstream task in our evaluation, along with its corresponding datasets (§4.3).

4.1 Baselines and Configurations

We compare FLAME to the following much larger language models, summarized in Table 1:

CodeT5: a 220 million parameter T5-based encoder-decoder model trained on both natural language and code. We present fine-tuned results.
Codex-Cushman: a 12 billion parameter autoregressive, decoder-only, GPT-3-based model trained on both natural language and code. We present both zero-shot and fine-tuned results.

Codex-Davinci: a 175 billion parameter autoregressive, decoder-only, GPT-3-based model trained on both natural language and code. We present zero-shot and few-shot results; we do not have the resources to fine-tune Davinci.

For Codex-based baselines, we use nucleus sampling [Holtzman et al., 2019] (temperature = 0.7) and sample 50 sequences per task. We sort these sequences based on their average token log probabilities, following [Joshi et al., 2022].
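That reranking step can be sketched as follows (a minimal illustration; the (formula, per-token log probabilities) candidate shape is our assumption, not the API's actual return type):

```python
def rank_candidates(candidates):
    # Sort sampled sequences by mean token log probability, best first.
    def avg_logprob(item):
        _, logprobs = item
        return sum(logprobs) / len(logprobs)
    return [formula for formula, _ in sorted(candidates, key=avg_logprob, reverse=True)]

candidates = [
    ('=SUM(A1:A10', [-0.9, -1.2, -0.8]),   # malformed, low confidence
    ('=SUM(A1:A10)', [-0.1, -0.2, -0.1]),  # well-formed, high confidence
]
ranked = rank_candidates(candidates)  # well-formed candidate ranks first
```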
We detail the prompts in Appendix B. For CodeT5, we use beam search with a beam width of 50 and consider the top 50 sequences.

4.2 Training Details

We pretrain FLAME for 10 epochs and finetune CodeT5 and FLAME on a cluster with 16 AMD MI200s, 96 cores, and 900 GB RAM. We finetune FLAME for 2 epochs for each downstream task and finetune CodeT5 for 25 epochs with a patience of 5 epochs. We carry out all Codex experiments on a cluster with 8 V100s, 40 cores, and 672 GB RAM. For Codex fine-tuning, we use low-rank adaptation (LoRA) [Hu et al., 2021]. Refer to Appendix C for more details.

4.3 Downstream Tasks

We consider three different downstream tasks.

System          Architecture     Number of parameters
Codex-Cushman   Decoder          12 billion
Codex-Davinci   Decoder          175 billion
CodeT5 (base)   Encoder-Decoder  220 million
FLAME (ours)    Encoder-Decoder  60 million

Table 1: Architecture and size comparison of baselines and FLAME.

Last-mile repair. Last-mile repair refers to repairs that require few edits and fix syntax and simple semantic errors, such as wrong function call arity. In this setting, FLAME is given the buggy formula as the input sequence, and the task is to generate the user's intended (and syntactically correct) formula without any last-mile error.

Example 2. The user has used the wrong call arity for ISERROR. Red highlights the error in the buggy formula, and green denotes the required edit to match the ground truth.

Buggy formula: =IF(ISERROR(G6*1.2, ""))
Ground-truth formula: =IF(ISERROR(G6*1.2), "")

Fine tuning. We create a finetuning dataset for all systems by taking 200K well-formed formulas from Excel help forums. We then randomly apply our user-inspired noise operators to generate broken versions.

Evaluation metric. We compute exact match with respect to the ground-truth repair. We consider the top 1 and top 5 candidates produced by each system per formula and report the exact match fraction.

Benchmarks. We evaluate all systems on two benchmarks. We use the collection of 273 labeled Excel formulas used in recent last-mile repair literature [Joshi et al., 2022]; the authors sourced these formulas from Excel help forums. We refer to this benchmark set as Forum.
We also reserve a split of 500 randomly sampled formulas, derived using the same procedure as our finetuning dataset, to create a Test benchmark set.

Autocompletion. Code completion is a popular task for language models trained on code, both due to its autoregressive nature and the practical value of code completion as a feature in developers' workflows. In this setting, FLAME is given a formula prefix, and the task is to generate the complete formula.

Example 3. Formula autocompletion.

Formula prefix: =B2<=EDATE(
Formula completion: =B2<=EDATE(TODAY(),-33)

Fine tuning. We curated a finetuning dataset for autocompletion by splitting 189K formulas and sampling a prefix length of {0.2, ..., 0.7, 0.8} fraction of tokens.
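That pair construction can be sketched as follows (our illustration; character-level tokens stand in for the real tokenizer's output):

```python
import random

def make_autocomplete_pair(tokens, rng):
    # Sample a prefix fraction from {0.2, ..., 0.7, 0.8} and split the
    # formula into a (prefix, full-formula) training pair.
    frac = rng.choice([0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
    cut = max(1, int(len(tokens) * frac))
    return tokens[:cut], tokens

rng = random.Random(7)
prefix, target = make_autocomplete_pair(list('=B2<=EDATE(TODAY(),-33)'), rng)
```

The model is then trained to emit the full formula given only the prefix.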
Evaluation metric. When completing formulas, some parts can be hard to predict due to lack of context [Guo et al., 2021], such as cell references, sheet names, string literals, and numerics. Therefore, in addition to exact match, we also consider sketch match for autocompletion with respect to the ground truth. Precisely, for sketch match, we use the same sketch procedure described in §3.2, which uses the Excel lexer.

Model           Last-Mile Repair         Syntax Reconstruction
                Forum       Test         Forum       Test
                T@1  T@5    T@1  T@5     T@1  T@5    T@1  T@5
Cushman         0.79 0.88   0.87 0.93    0.70 0.80   0.84 0.91
Davinci (FS)    0.76 0.89   0.54 0.77    0.62 0.77   0.61 0.73
CodeT5 (220M)   0.70 0.84   0.84 0.90    0.70 0.84   0.82 0.89
CodeT5 (60M)    0.72 0.83   0.82 0.89    0.65 0.81   0.83 0.89
FLAME           0.76 0.89   0.83 0.91    0.75 0.89   0.84 0.89

Table 2: Fine-tuned performance for the last-mile repair and syntax reconstruction tasks. Codex-Davinci uses few shots and is denoted by an FS suffix. FLAME outperforms larger models at last-mile repair in the Forum benchmark at top-5 and comes in second at top-1. In syntax reconstruction, FLAME outperforms all models at both cutoffs in the Forum benchmark. Bold denotes the best performing model and underline represents the second best.
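The sketch-match metric used for autocompletion can be illustrated with a regex-based re-implementation of the sketch procedure from §3.2 (a hedged approximation; the paper uses the Excel lexer rather than regular expressions):

```python
import re

def sketch(formula: str) -> str:
    # Abstract string constants, cell references, and numeric constants
    # to their token types, in that order so earlier rewrites are safe.
    formula = re.sub(r'"[^"]*"', 'str', formula)
    formula = re.sub(r'\$?[A-Za-z]{1,3}\$?\d+', 'cell', formula)
    return re.sub(r'\d+(\.\d+)?', 'num', formula)

def sketch_match(pred: str, gold: str) -> bool:
    return sketch(pred) == sketch(gold)

# A completion with the right structure but a different cell range still
# counts as a sketch match, though not as an exact match.
ok = sketch_match('=SUM(A1:A10)', '=SUM(B1:B20)')  # True
```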
Models     Exact Match                       Sketch Match
           0.25  0.50  0.75  0.90  0.99      0.25  0.50  0.75  0.90  0.99
Cushman    0.0   0.04  0.27  0.61  0.86      0.12  0.26  0.47  0.71  0.86
Davinci    0.0   0.03  0.31  0.64  0.85      0.10  0.25  0.53  0.76  0.85
CodeT5     0.0   0.02  0.10  0.27  0.21      0.03  0.09  0.20  0.39  0.22
FLAME      0.01  0.06  0.34  0.70  0.93      0.10  0.24  0.55  0.84  0.94

Table 3: Zero-shot autocompletion performance of FLAME, Codex-Cushman, and Codex-Davinci, and fine-tuned CodeT5 (as denoted by the FT suffix). Given {0.25, 0.50, 0.75, 0.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content='90,0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content='99} fraction of formula prefix, we report the proportion of formulas completed in the top 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' We observe that FLAME outperforms all the large language models in the exact match setting and most (3/5) of the sketch match settings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Bold denotes best performing model and Underline represents second best.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' to tokenize a formula and preserves built-in function names but replaces all other tokens with their token type.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' We then compare the sketches of the formulas for a match.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' For instance, in Example 3, predicting the numeric −33 is highly contextual, so in a sketch we match with its token type, Numeric.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Benchmarks We evaluate autocompletion on a single bench- mark, consisting of the 273 ground truth formulas from the Forum last-mile repair benchmark.' 
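The sketch abstraction can be made concrete with a small Python sketch. The regular-expression lexer and token classes (Cell, Numeric, String) below are simplifying assumptions for illustration, not the paper's actual Excel tokenizer.

```python
import re

# Assumed subset of built-in function names to preserve verbatim.
BUILTINS = {"SUM", "IF", "AND", "MAX", "MOD", "VLOOKUP"}

TOKEN_RE = re.compile(
    r"(?P<Func>[A-Z]+(?=\())"                 # function name followed by "("
    r"|(?P<Cell>\$?[A-Z]+\$?\d+)"             # cell reference, e.g. $A$1
    r"|(?P<Numeric>-?\d+(?:\.\d+)?)"          # numeric literal, e.g. -33
    r"|(?P<String>\"[^\"]*\")"                # string literal
    r"|(?P<Punct>[=+\-*/<>&(),:!;{}\[\].])"   # operators and delimiters
)

def sketch(formula: str) -> list[str]:
    """Keep built-in function names and structural tokens; abstract the rest
    to their token type."""
    out = []
    for m in TOKEN_RE.finditer(formula):
        kind, text = m.lastgroup, m.group()
        if kind == "Punct" or (kind == "Func" and text in BUILTINS):
            out.append(text)
        else:
            out.append(kind)  # replace the token with its token type
    return out

def sketch_match(pred: str, gold: str) -> bool:
    return sketch(pred) == sketch(gold)
```

Under this abstraction, `sketch_match('=SUM(A1:B2,-33)', '=SUM(C3:D4,7)')` holds, because both formulas reduce to the same sketch even though their cell references and literals differ.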
For each formula, given the exact match or sketch match metric, we predict completions at 0.25, 0.50, 0.75, 0.90, and 0.99 fractions of the formula prefix.

Syntax Reconstruction
We introduce a new task that we term syntax reconstruction. The input to this task consists of Excel formulas that we have processed to remove all delimiters, resulting in a flat stream of lexer tokens. Excel delimiters are defined to be the following set of tokens: { ( ) ! , ; { } [ ] . }. The model is then tasked with generating the original formula with the appropriate delimiters.

Example 4. Syntax reconstruction given the Excel tokens.
Tokens: MAX 0 MOD C10 - B10 1 - D10
Reconstruction: MAX(0,MOD(C10-B10,1)-D10)

Since, by definition, syntax reconstruction cannot introduce tokens into the output that are neither delimiters nor in the original input token stream, FLAME employs constrained decoding to greedily remove invalid candidates from the search space. Our tokenizer design, particularly splitting on punctuation, makes this decoding strategy easier to implement.

Fine Tuning We curate a finetuning dataset by sampling 200k formulas from the publicly available Excel corpus that we used for FLAME's pretraining. We keep the subset that contains at least one delimiter (139k) and remove all delimiters.
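As a rough illustration of this input preparation and of the decoding constraint, the sketch below strips the delimiter set from a formula and checks that a candidate reconstruction only uses delimiters or tokens from the input stream. The tokenizer regex is an assumption, not FLAME's actual lexer.

```python
import re

DELIMS = set("()!,;{}[].")
# Assumed token pattern: identifiers/cell refs, numbers, strings, single chars.
TOKEN = r'[A-Za-z_][A-Za-z0-9_]*|\d+|"[^"]*"|\S'

def strip_delimiters(formula: str) -> str:
    """Flatten a formula into the delimiter-free token stream the model sees."""
    tokens = re.findall(TOKEN, formula)
    return " ".join(t for t in tokens if t not in DELIMS)

def is_valid_reconstruction(candidate: str, token_stream: str) -> bool:
    """Constrained-decoding invariant: every output token is either a
    delimiter or appears in the original input token stream."""
    allowed = DELIMS | set(token_stream.split())
    return all(t in allowed for t in re.findall(TOKEN, candidate))

# Reproduces Example 4's token stream:
stream = strip_delimiters("MAX(0,MOD(C10-B10,1)-D10)")
# stream == "MAX 0 MOD C10 - B10 1 - D10"
```

A candidate such as `SUM(C10)` would be pruned against this stream, since `SUM` never occurs in the input tokens.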
Evaluation Metric We compute exact match with respect to the ground truth and consider the top-1 and top-5 candidates produced by each system per formula.

Benchmarks We derive a benchmark set from the last-mile repair benchmarks by removing the delimiters from every ground-truth formula. We refer to this benchmark as Forum. Finally, we also consider a Test split that reflects the same preparation as the finetuning dataset.

5 Evaluation
We explore the following research questions in our evaluation:
RQ1: How does FLAME perform on formula intelligence tasks compared to substantially larger language models?
RQ2: How do pretraining design decisions such as data curation, model size, pretraining objectives, and tokenizer affect FLAME's downstream performance?
RQ3: How do various decoding strategies affect different downstream-task performances for FLAME?
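A minimal sketch of this top-k exact-match scoring; normalizing away whitespace before comparison is an illustrative assumption.

```python
def exact_match_at_k(candidates: list[str], ground_truth: str, k: int) -> bool:
    """True if any of the first k candidates matches the ground truth."""
    norm = lambda f: "".join(f.split())  # assumed: ignore whitespace only
    return any(norm(c) == norm(ground_truth) for c in candidates[:k])

def score(benchmarks: list[tuple[list[str], str]], k: int) -> float:
    """Fraction of benchmarks solved within the top-k candidates."""
    hits = sum(exact_match_at_k(cands, gt, k) for cands, gt in benchmarks)
    return hits / len(benchmarks)
```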
5.1 RQ1: Larger Language Models
We now compare FLAME to substantially larger language models on our three formula intelligence tasks.

Last Mile Repair and Syntax Reconstruction We finetune FLAME, CodeT5, and Codex-Cushman for last-mile repair and syntax reconstruction, and use few-shot prompts with three shots for Codex-Davinci.

Model      Last Mile Repair            Syntax Reconstruction
           Forum        Test           Forum        Test
           T@1   T@5    T@1   T@5      T@1   T@5    T@1   T@5
Cushman    0.55  0.85   0.41  0.63     0.27  0.53   0.23  0.46
Davinci    0.60  0.82   0.51  0.75     0.51  0.65   0.31  0.45
FLAME      0.71  0.88   0.74  0.85     0.41  0.53   0.50  0.58

Table 4: Zeroshot last-mile repair and syntax reconstruction performance of FLAME and the Codex models. FLAME outperforms all the larger models on the last-mile repair task and solves more benchmarks than Codex-Cushman on the syntax reconstruction task. Bold denotes the best-performing model and underline the second best.

                                       Zeroshot                                                Finetuned
Model                                  LMR            SR             AC (EM)      AC (SM)      LMR            SR
                                       Forum  Test    Forum  Test    0.75  0.90   0.75  0.90   Forum  Test    Forum  Test
FLAME (60M)                            0.71   0.74    0.41   0.50    0.34  0.70   0.55  0.84   0.76   0.83    0.75   0.84
FLAME (16M)                            0.68   0.64    0.23   0.42    0.24  0.59   0.54  0.76   0.73   0.78    0.73   0.78
Global Deduplication                   0.57   0.56    0.16   0.20    0.15  0.45   0.41  0.59   0.68   0.76    0.73   0.81
T5 (Generic objectives and tokenizer)  0.11   0.12    0.02   0.05    0.07  0.22   0.25  0.37   0.62   0.82    0.49   0.74

Table 5: We compare multiple pretraining design decisions: model size, pretraining data curation, and domain-specific pretraining objectives and tokenizer. We report top-1 for Last-Mile Repair (LMR) and Syntax Reconstruction (SR) and top-5 for Autocompletion (AC) with Exact Match (EM) and Sketch Match (SM). For details, refer to Appendix D. The smaller model performs worse across the board. Curating data with global deduplication reduces performance by up to 30 points. Removing the domain-specific objectives and tokenizer impacts performance the most.
Although one of our pretraining objectives closely resembles last-mile repair (noisy auto-encoding), we find that finetuning FLAME helps direct it towards a particular task. We summarize the results in Table 2 and observe that on the Forum last-mile repair benchmark FLAME outperforms all models at top-5 and is second best to Codex-Cushman at top-1. On the Test benchmark, we find that FLAME is second best to Codex-Cushman at top-5 and is close to CodeT5's second-best performance at top-1. On the Test benchmark, Davinci's performance is substantially worse than that of the fine-tuned models. On further analysis, we found that all models solve 73% of the Forum benchmark. FLAME solves 4% of the benchmarks that no other model solves and fails on 1% of the benchmarks that all other models fix. FLAME also generates syntactically correct formulas for 98% of the benchmarks in the top 5.

In Figure 4, we show examples where FLAME gets the correct fix and other models do not, and vice versa. We note that in some cases FLAME's fixes appear to be more natural but fail to match the user's ground-truth repair.

For syntax reconstruction on Forum, we find that FLAME outperforms the other models at both top-1 and top-5. Interestingly, CodeT5 also solves more syntax reconstruction tasks than both Codex models. We hypothesize that, since syntax reconstruction is a new task compared to the more traditional repair problem, encoder-decoder models perform better than decoder-only models after fine-tuning, as shown by [Tay et al., 2022]. On Test, we find that FLAME performs similarly to Codex-Cushman (identical at top-1 and 2 points lower at top-5).
We find that 54% of the Forum syntax reconstruction benchmarks are solved by all the models, 1% are solved only by FLAME, and there are no benchmarks that all other models solve but FLAME does not. We attribute this performance to our pretraining design choices. First, FLAME learns to generate syntactically correct code as a result of its noisy auto-encoding pretraining objective. Second, FLAME learns the natural distribution of formulas by generating complete sequences during pretraining, rather than just mask values and sentinel tokens.

Example 1.
Buggy formula:    =IF('Jan 13'!B2="", 'Feb 13'!B2="", 'Mar 13'!B2="", 'Apr 13'!B2="", yes, no)
Ground-truth fix: =IF(AND('Jan 13'!B2="", 'Feb 13'!B2="", 'Mar 13'!B2="", 'Apr 13'!B2=""), "yes", "no")
Fixed by FLAME; missed by Codex-Cushman, Codex-Davinci, and CodeT5.

Example 2.
Buggy formula:         =VLOOKUP($Z25,$X$25:$Y:31,2,FALSE)
Ground-truth fix:      =VLOOKUP($Z25,$X$25:$Y31,2,FALSE)
FLAME's top candidate: =VLOOKUP($Z25,$X$25:$Y$31,2,FALSE)

Figure 4: Repair tasks with diverging performance. In Example 1, the user did not use the AND function and missed double quotes around the string literals yes and no. FLAME fixes this (in top-5), while the other models fail. In Example 2, FLAME's top candidate is syntactically valid but does not match the user's fix, while the other models' predictions do.
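To make the noisy auto-encoding objective concrete, the sketch below builds a (corrupted, original) training pair where the target is the complete original sequence. The specific noise operations (single-character deletion or duplication) are illustrative assumptions, not FLAME's exact corruption scheme.

```python
import random

def corrupt(formula: str, rng: random.Random) -> str:
    """Apply one small, last-mile-style edit to a formula."""
    i = rng.randrange(len(formula))
    if rng.random() < 0.5:
        return formula[:i] + formula[i + 1:]       # delete one character
    return formula[:i] + formula[i] + formula[i:]  # duplicate one character

def make_pair(formula: str, seed: int = 0) -> tuple[str, str]:
    """Return (noisy input, complete original target)."""
    return corrupt(formula, random.Random(seed)), formula
```

Because the decoder is trained to emit the whole repaired formula rather than mask values and sentinel tokens, such pairs directly resemble the last-mile repair task.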
Zero-shot performance FLAME's pretraining objectives allow us to consider zero-shot performance for both last-mile repair and syntax reconstruction. In Table 4, we observe that FLAME outperforms the Codex models on last-mile repair across all benchmarks. We attribute this to the closeness between our noisy auto-encoding pretraining objectives and the last-mile repair task. On the syntax reconstruction task, FLAME outperforms Codex-Cushman; we believe this is because syntax reconstruction can be considered an extreme case of repair.

Formula autocompletion The autoregressive nature of the Codex models and FLAME's pretraining objectives allows us to evaluate their zero-shot performance2 on formula autocompletion. Note that we fine-tune CodeT5 for this task, as it is pretrained on smaller span lengths (1 to 5 tokens) and generates special mask tokens in a zero-shot setting. We compute exact match and sketch match metrics with top-5 results. In Table 3, we observe that FLAME performs better than all the larger models on the exact match metric and on 3 out of 5 prefix lengths for sketch match. We note that Codex-Cushman and Codex-Davinci fail to complete 14% and 15% of the benchmarks, respectively, given a 0.99 fraction of the prefix, whereas FLAME fails to complete 6% of the benchmarks. We observe significantly lower performance from CodeT5, likely due to the lack of longer mask spans during pretraining. Surprisingly, Codex-Davinci performs slightly worse than the smaller Codex-Cushman for 3 out of 5 prefix lengths. Inspection of completions shows that Codex-Davinci tends to generate more tokens than required when completing these benchmark tasks.
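The sketch match metric can be approximated as follows; the exact sketching rules in the paper may differ, so treat the regular expressions here as illustrative.

```python
import re

def sketch(formula: str) -> str:
    # Anonymize operands so that only the formula's structure remains.
    s = re.sub(r'"[^"]*"', "STR", formula)                       # string literals
    s = re.sub(r"\$?[A-Z]+\$?\d+(:\$?[A-Z]+\$?\d+)?", "REF", s)  # cell refs/ranges
    return re.sub(r"\b\d+(\.\d+)?\b", "NUM", s)                  # numeric literals

def sketch_match(predicted: str, gold: str) -> bool:
    # Credit a prediction whose structure matches the gold formula,
    # even if the concrete operands differ.
    return sketch(predicted) == sketch(gold)

print(sketch_match("=SUM(A1:A9)+2", "=SUM(B2:B4)+7"))  # True
print(sketch_match("=SUM(A1)", "=MAX(A1)"))            # False
```

Sketch match is more forgiving than exact match at long prefixes, where only operand choices remain open.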
We also observe cases where models succeed with a shorter prefix but fail given a longer prefix.

5.2 RQ2: Pretraining design decisions
We investigate FLAME's data curation, model size, domain-specific pretraining objectives, and domain-specific tokenizer, and present results in Table 5.

Training data curation Previous work [Lee et al., 2021; Kandpal et al., 2022] has shown that deduplication can improve the performance of language models and reduce the memorization of training data. Therefore, we curate a pretraining dataset by performing workbook-level sketch-based formula deduplication. Alternatively, one might consider performing global (pooled across all workbooks) sketch-based deduplication. This alternative results in a pretraining set of 591K formulas. Table 5 shows that training on this smaller corpus results in a lower-performance model: FLAME's zero-shot performance falls by 14 points and its fine-tuned performance by 18 points for last-mile repair on the Forum benchmarks.

Model size We trained two variants of FLAME, with 16M and 60M parameters. Table 5 compares FLAME-16M and FLAME-60M. We find that performance declines slightly across tasks and benchmarks when we reduce the model size to 16M. However, FLAME-16M can still outperform larger models such as Codex in 5 out of 10 zero-shot and fine-tuned settings, highlighting the efficacy of our design choices for FLAME.
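The workbook-level deduplication described under training data curation can be sketched as follows, assuming a sketching function that anonymizes operands (an illustrative approximation; the paper's sketching rules may differ).

```python
import re

def sketch(formula: str) -> str:
    # Anonymize operands; illustrative approximation of formula sketches.
    s = re.sub(r'"[^"]*"', "STR", formula)
    s = re.sub(r"\$?[A-Z]+\$?\d+", "REF", s)
    return re.sub(r"\b\d+(\.\d+)?\b", "NUM", s)

def dedup(corpus: dict[str, list[str]]) -> list[str]:
    # Keep one formula per sketch *within each workbook*; the same sketch
    # may still appear in multiple workbooks (unlike global deduplication,
    # which pools all workbooks and yields a much smaller corpus).
    kept = []
    for workbook, formulas in corpus.items():
        seen = set()
        for formula in formulas:
            sk = sketch(formula)
            if sk not in seen:
                seen.add(sk)
                kept.append(formula)
    return kept

corpus = {"wb1": ["=SUM(A1:A9)", "=SUM(B1:B9)"], "wb2": ["=SUM(C1:C9)"]}
print(dedup(corpus))  # ['=SUM(A1:A9)', '=SUM(C1:C9)']
```

Within wb1 the two formulas share a sketch, so only the first survives, while wb2's copy is retained; global deduplication would have kept only one of the three.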
Pretraining objectives and tokenizer To evaluate the effectiveness of our domain-specific pretraining objectives and tokenizer, we pretrained a 60M-parameter T5 model with generic pretraining objectives and a generic tokenizer. Specifically, this model uses tail-masking, masked span prediction without accounting for lexer token boundaries, and random denoising objectives. Additionally, it uses the CodeT5 tokenizer trained on our pretraining data.

2We fine-tuned Codex-Cushman and FLAME but observed worse performance, possibly from over-fitting.

[Figure 5: Failing case of syntax reconstruction. Given the tokens MAX C2 Sum C3:C4 SUM C5:C7 1, the target formula is MAX(C2, Sum(C3:C4),SUM(C5:C7),1), but the T5 model with generic pretraining and tokenizer produces MAX(C2,Sum!C3:C4,SUM(C5:C7),1). Due to the different capitalization of Sum and SUM, the model treats them as different tokens, converting them to an identifier and a function, respectively.]
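To illustrate why respecting lexer token boundaries matters, here is a toy Excel-formula lexer that treats a name followed by "(" as a function and normalizes its capitalization, so Sum and SUM map to the same vocabulary entry. This is only a sketch; FLAME's actual tokenizer is more involved.

```python
import re

# Tiny lexer for a subset of Excel formulas. Tokenizing on lexer token
# boundaries (and normalizing function-name case) avoids the Figure 5
# failure, where a generic subword tokenizer treats "Sum" and "SUM" as
# unrelated tokens.
TOKEN = re.compile(
    r"""(?P<string>"[^"]*")                  # string literal
      | (?P<cell>\$?[A-Z]+\$?\d+)            # cell reference, e.g. $Y$31
      | (?P<name>[A-Za-z_][A-Za-z0-9_.]*)    # function or identifier
      | (?P<number>\d+(?:\.\d+)?)            # numeric literal
      | (?P<op>[(),:!=<>+\-*/&])             # operators and punctuation
      | (?P<ws>\s+)                          # whitespace (dropped)
    """,
    re.VERBOSE,
)

def tokenize(formula: str) -> list[str]:
    tokens, pos = [], 0
    while pos < len(formula):
        m = TOKEN.match(formula, pos)
        if not m:  # unknown character: emit it as-is
            tokens.append(formula[pos])
            pos += 1
            continue
        pos = m.end()
        if m.lastgroup == "ws":
            continue
        text = m.group()
        # A name immediately followed by "(" is a function call: normalize
        # its case so Sum and SUM share one vocabulary entry.
        if m.lastgroup == "name" and formula[pos:pos + 1] == "(":
            text = text.upper()
        tokens.append(text)
    return tokens

print(tokenize('MAX(C2, Sum(C3:C4), 1)'))
# → ['MAX', '(', 'C2', ',', 'SUM', '(', 'C3', ':', 'C4', ')', ',', '1', ')']
```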
Table 5 shows that this variant performs worse across all tasks and benchmarks, in both the zero-shot and fine-tuned settings. We attribute the large drop, up to 62 points, on the zero-shot last-mile repair tasks to the absence of our user-inspired denoising pretraining objective. Moreover, we hypothesize that FLAME's strong syntax reconstruction performance can be attributed to the domain-specific tokenizer. Figure 5 illustrates how the generic tokenizer treats tokens with different capitalizations as unrelated, resulting in incorrect generation.

5.3 RQ3: Decoding strategy
In Table 6, we evaluate FLAME using four decoding strategies: beam search, group beam search [Vijayakumar et al., 2016], nucleus sampling [Holtzman et al., 2019], and top-k sampling [Fan et al., 2018].
We find that FLAME performs best with group beam search decoding (group size of 2) across all the formula intelligence tasks. However, for autocompletion with sketch match, nucleus sampling showed superior performance. We believe this is because autocompletion requires more diverse results, particularly at shorter prefixes. Refer to Appendix E for the autocompletion table.

Decoding Method     LMR (Forum)     SR (Forum)
                    T@1    T@5      T@1    T@5
Beam Search         0.76   0.88     0.75   0.89
Group Beam          0.76   0.89     0.75   0.89
Nucleus Sampling    0.72   0.85     0.70   0.84
Top K               0.67   0.86     0.67   0.84

Table 6: Performance by decoding strategy for last-mile repair (LMR) and syntax reconstruction (SR). Beam search and group beam search have similar performance, and both outperform nucleus and top-k sampling.
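The two sampling strategies in Table 6 differ mainly in how they truncate the next-token distribution before sampling. The toy functions below (not FLAME's implementation) show that top-k keeps a fixed number of candidates, while nucleus (top-p) sampling adapts the candidate pool to how peaked the distribution is.

```python
def top_k_filter(probs: dict[str, float], k: int) -> dict[str, float]:
    # Keep the k most likely tokens, then renormalize.
    kept = dict(sorted(probs.items(), key=lambda kv: -kv[1])[:k])
    z = sum(kept.values())
    return {tok: p / z for tok, p in kept.items()}

def top_p_filter(probs: dict[str, float], p: float) -> dict[str, float]:
    # Keep the smallest set of tokens whose cumulative probability
    # reaches p (the "nucleus"), then renormalize.
    kept, mass = {}, 0.0
    for tok, pr in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[tok] = pr
        mass += pr
        if mass >= p:
            break
    z = sum(kept.values())
    return {tok: pr / z for tok, pr in kept.items()}

# A peaked next-token distribution after "=VLOOKUP(": the nucleus narrows
# to the near-certain token, while top-k always keeps k options.
probs = {"$Z25": 0.90, "A1": 0.05, "SUM": 0.03, '"x"': 0.02}
print(top_p_filter(probs, 0.9))  # → {'$Z25': 1.0}
print(top_k_filter(probs, 2))    # keeps $Z25 and A1, renormalized
```

This adaptivity is one reason nucleus sampling yields more diverse completions at short prefixes, where the distribution is flatter, without degrading quality when the model is confident.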
6 Conclusions and Future Work
We present FLAME, a small (60M-parameter) language model for spreadsheet formulas, which captures domain-specific properties in its data curation, tokenization, and pretraining objectives. We implemented FLAME for Excel formulas and evaluate it on three downstream tasks: last-mile repair, autocompletion, and a novel task that we term syntax reconstruction. We compare with the much larger models CodeT5, Codex-Cushman, and Codex-Davinci. When fine-tuned, FLAME achieves top performance in 6 of our 10 experimental settings, despite having two orders of magnitude fewer parameters. Future work will explore downstream tasks that require additional spreadsheet context (e.g., tables). To tackle such tasks, we will explore extending our pretraining objectives to incorporate context, and the extent to which FLAME can integrate with existing table encoder models.
Acknowledgments
We thank Microsoft Research Cambridge for sharing the Excel corpus used for pretraining FLAME. We thank OCTO at Microsoft (in particular Gopi Kumar and the AMD vTeam) for providing us with compute resources. We also thank the Excel team for their feedback and encouragement in pursuing this work.

References
[Bavishi et al., 2022] Rohan Bavishi, Harshit Joshi, José Cambronero, Anna Fariha, Sumit Gulwani, Vu Le, Ivan Radiček, and Ashish Tiwari. Neurosymbolic repair for low-code formula languages. Proc. ACM Program. Lang., 6(OOPSLA2), October 2022.
[Berabi et al., 2021] Berkay Berabi, Jingxuan He, Veselin Raychev, and Martin Vechev. TFix: Learning to fix coding errors with a text-to-text transformer. In International Conference on Machine Learning, pages 780–791. PMLR, 2021.
[Chen et al., 2021a] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code, 2021.
[Chen et al., 2021b] Xinyun Chen, Petros Maniatis, Rishabh Singh, Charles Sutton, Hanjun Dai, Max Lin, and Denny Zhou. SpreadsheetCoder: Formula prediction from semi-structured context. In International Conference on Machine Learning, pages 1661–1672. PMLR, 2021.
[Chowdhery et al., 2022] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. PaLM: Scaling language modeling with Pathways. arXiv preprint arXiv:2204.02311, 2022.
[Domingo et al., 2018] Miguel Domingo, Mercedes García-Martínez, Alexandre Helle, Francisco Casacuberta, and Manuel Herranz. How much does tokenization affect neural machine translation? arXiv preprint arXiv:1812.08621, 2018.
[Fan et al., 2018] Angela Fan, Mike Lewis, and Yann Dauphin. Hierarchical neural story generation. arXiv preprint arXiv:1805.04833, 2018.
[Feng et al., 2017] Yu Feng, Ruben Martins, Yuepeng Wang, Isil Dillig, and Thomas W. Reps. Component-based synthesis for complex APIs. In Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages, pages 599–612, 2017.
[Feng et al., 2020] Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. CodeBERT: A pre-trained model for programming and natural languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1536–1547, Online, November 2020. Association for Computational Linguistics.
[Fried et al., 2022] Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, and Mike Lewis. InCoder: A generative model for code infilling and synthesis. arXiv preprint arXiv:2204.05999, 2022.
[GitHub, 2021] GitHub. GitHub Copilot. https://github.com/features/copilot/, 2021. [Online; accessed 09-January-2023].
[Guo et al., 2021] Daya Guo, Alexey Svyatkovskiy, Jian Yin, Nan Duan, Marc Brockschmidt, and Miltiadis Allamanis. Learning to complete code with sketches. In International Conference on Learning Representations, 2021.
[Gupta et al., 2017] Rahul Gupta, Soham Pal, Aditya Kanade, and Shirish Shevade. DeepFix: Fixing common C language errors by deep learning. In Thirty-First AAAI Conference on Artificial Intelligence, 2017.
[Holtzman et al., 2019] Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751, 2019.
[Hu et al., 2021] Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Lora: Low-rank adaptation of large language models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' arXiv preprint arXiv:2106.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content='09685, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' [Jha et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=', 2010] Susmit Jha, Sumit Gulwani, Sanjit A Se- shia, and Ashish Tiwari.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Oracle-guided component-based program synthesis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' In 2010 ACM/IEEE 32nd International Conference on Software Engineering, volume 1, pages 215– 224.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' IEEE, 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' [Joshi et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=', 2022] Harshit Joshi, Jos´e Cambronero, Sumit Gulwani, Vu Le, Ivan Radicek, and Gust Verbruggen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Re- pair is nearly generation: Multilingual program repair with llms.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' arXiv preprint arXiv:2208.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content='11640, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' [Kandpal et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=', 2022] Nikhil Kandpal, Eric Wallace, and Colin Raffel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Deduplicating training data mitigates privacy risks in language models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' arXiv preprint arXiv:2202.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content='06539, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' [Lee et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=', 2021] Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison- Burch, and Nicholas Carlini.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Deduplicating training data makes language models better.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' arXiv preprint arXiv:2107.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content='06499, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' [Lewis et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=', 2020] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Tetreault, editors, Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7871–7880.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Associa- tion for Computational Linguistics, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' [Li et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=', 2022a] Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, R´emi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Competition-level code generation with alphacode.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Science, 378(6624):1092–1097, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' [Li et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=', 2022b] Zhiyu Li, Shuai Lu, Daya Guo, Nan Duan, Shailesh Jannu, Grant Jenks, Deep Majumder, Jared Green, Alexey Svyatkovskiy, Shengyu Fu, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Automating code review activities by large-scale pre-training.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' In Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pages 1035–1047, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' [Lu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=', 2021] Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin Clement, Dawn Drain, Daxin Jiang, Duyu Tang, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Codexglue: A machine learning benchmark dataset for code understanding and generation.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' In Thirty-fifth Confer- ence on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1), 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' [Nijkamp et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=', 2022] Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Codegen: An open large language model for code with multi-turn program synthesis, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' [OpenAI, 2023] OpenAI.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Openai pricing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' https://openai.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content='com/ api/pricing/, 2023.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' [Online;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' accessed 17-January-2023].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' [Radford et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=', 2019] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Language models are unsupervised multitask learners.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Ope- nAI blog, 1(8):9, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' [Raffel et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=', 2020] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Liu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Exploring the lim- its of transfer learning with a unified text-to-text trans- former.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Journal of Machine Learning Research, 21(140):1– 67, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' [Rahmani et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=', 2021] Kia Rahmani, Mohammad Raza, Sumit Gulwani, Vu Le, Daniel Morris, Arjun Radhakrishna, Gustavo Soares, and Ashish Tiwari.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Multi-modal program inference: a marriage of pre-trained language models and component-based synthesis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Proceedings of the ACM on Programming Languages, 5(OOPSLA):1–29, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' [Sennrich et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=', 2016] Rico Sennrich, Barry Haddow, and Alexandra Birch.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Neural machine translation of rare words with subword units.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Ger- many, August 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Association for Computational Lin- guistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' [Svyatkovskiy et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=', 2020] Alexey Svyatkovskiy, Shao Kun Deng, Shengyu Fu, and Neel Sundaresan.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Intellicode compose: Code generation using transformer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' In 28th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineer- ing (ESEC/FSE ’20), May 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' [Tay et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=', 2022] Yi Tay, Mostafa Dehghani, Vinh Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Tran, Xavier Garcia, Jason Wei, Xuezhi Wang, Hyung Won Chung, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Denny Zhou, Neil Houlsby, and Donald Metzler.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Ul2: Unifying language learning paradigms, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' [Vijayakumar et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=', 2016] Ashwin K Vijayakumar, Michael Cogswell, Ramprasath R Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Diverse beam search: Decoding diverse solutions from neural sequence models.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' arXiv preprint arXiv:1610.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content='02424, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' [Wang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=', 2021] Yue Wang, Weishi Wang, Shafiq Joty, and Steven CH Hoi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' Codet5: Identifier-aware unified pre- trained encoder-decoder models for code understanding and generation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' arXiv preprint arXiv:2109.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content='00859, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' [Xu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=', 2022] Frank F Xu, Uri Alon, Graham Neubig, and Vincent Josua Hellendoorn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' A systematic evaluation of large language models of code.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FST4oBgHgl3EQfWDi_/content/2301.13779v1.pdf'} +page_content=' In Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming, pages 1–10, 2022.' 
[Yasunaga and Liang, 2020] Michihiro Yasunaga and Percy Liang. Graph-based, self-supervised program repair from diagnostic feedback. In International Conference on Machine Learning, pages 10799–10808. PMLR, 2020.
[Yasunaga and Liang, 2021] Michihiro Yasunaga and Percy Liang. Break-it-fix-it: Unsupervised learning for program repair. In International Conference on Machine Learning, pages 11941–11952. PMLR, 2021.

A User noise operators

We implement the following noise operators:

1. Wrong Range: we replace the range operator : with one of the following symbols: {; , space "}, or we delete the range operator.
2. Malformed Range: a range consists of 4 elements, col1, row1, col2, row2, written as col1row1:col2row2. We randomly delete one of these elements. For example: col1:col2row2.
3. Space between Function and Arguments in a Call: we introduce a space between the function name and the opening parenthesis for built-in functions. For example, SUM(A1:A10) becomes SUM (A1:A10).
4. Change number of arguments: we change the number of arguments for functions with fixed arity. For example, IF has a minimum arity of 2 and a maximum arity of 3. Specifically, if a function call has as many arguments as the function's minimum arity, we randomly delete one argument; if it has as many arguments as the maximum arity, we randomly copy one of the existing arguments and pass it as an additional argument. For example, IF(A2>10, True, False) can become IF(A2>10, True, False, False).
5. Swap arguments: if a function takes arguments of different types, we swap these arguments. For example, IF(A1>10, 1, 2) can become IF(1, A1>10, 2).
6. Space between relational operators: we add a space inside relational operators, such as < =.
7. Swap relational operators: we swap the characters of relational operators, so <= becomes =<.
8. Inequality noise operator: in Excel, <> is the inequality operator. We replace it with the incorrect != or =!.
9. Invalid Equality: we also corrupt the equality operator. The equality operator in Excel is =; we replace it with == or ===.
10. Malformed Sheet Name: multi-word sheet names in Excel must be enclosed in single quotes (''). We randomly choose to either delete the single quotes or replace them with double quotes. For example, 'Sheet 1'!A10 can become "Sheet 1"!A10.
11. Remove exclamation mark: in Excel, sheet names are followed by an exclamation mark to denote a sheet reference. We delete this exclamation mark.
12. Malformed Strings: we corrupt strings by either deleting the double quotes or replacing them with single quotes.
13. Add Comma and Remove Parentheses: we randomly choose to either insert a comma before a closing parenthesis, or insert a comma and delete the closing parenthesis.
14. Add random operators: we define a set of operators that we randomly insert into the formula at a random position. These operators are: {+ - * / ^ & < > = . ) #}.
15. Add operator at the end: we randomly add one of the operators mentioned previously at the end of the sequence.
16. Add Parentheses: we add opening and closing parentheses at random places.
17. Corrupting Unreliable tokens: following [Bavishi et al., 2022], we randomly add, delete, or replace unreliable tokens. Unreliable tokens are tokens where users often make mistakes, defined to be delimiters.
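A few of these operators can be sketched in Python. This is an illustrative sketch only: the function names, the choice of replacement symbols, and the single-occurrence replacement strategy are our own assumptions, not the paper's implementation.

```python
import random

def wrong_range(formula: str) -> str:
    """Operator 1 (sketch): replace the range operator ':' with an
    incorrect symbol, or delete it ("" below)."""
    replacement = random.choice([";", ",", " ", '"', ""])
    return formula.replace(":", replacement, 1)

def swap_relational(formula: str) -> str:
    """Operator 7 (sketch): swap the characters of a relational
    operator, e.g. '<=' becomes '=<'."""
    return formula.replace("<=", "=<", 1)

def invalid_equality(formula: str) -> str:
    """Operator 9 (sketch): corrupt Excel's '=' equality into '==' or
    '==='. The leading '=' that starts every formula is left intact."""
    head, body = formula[0], formula[1:]
    return head + body.replace("=", random.choice(["==", "==="]), 1)

print(swap_relational("=IF(A1<=10, 1, 2)"))  # =IF(A1=<10, 1, 2)
```

Applying one randomly chosen operator per formula yields (buggy, fixed) pairs for the repair task without needing real user edit histories.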
B Codex Prompts

For all our Codex experiments, we use the following prompts for zero-shot and fine-tuning, with a temperature of 0.7.

B.1 Repair - Zero-shot and Fine-tuning

    ##### Fix bugs in the below code
    ### Buggy Excel
    ### Fixed Excel
    ##### Fix bugs in the below code
    ### Buggy Excel
    =SUMIFS( Master!$P:$P, Master!$F:$F,$A7, Master856!$E:212Systems$B7 )
    ### Fixed Excel

B.2 Syntax Reconstruction - Zero-shot and Fine-tuning

    ### Excel Tokens
    ### Complete Excel Formula
    ### Excel Tokens
    INDEX Table1 SMALL IF Table1 COMPANY_NAME = $E$1 ROW Table1 COMPANY_NAME - 1 ROW 2:2 3
    ### Complete Excel Formula

B.3 Autocomplete - Zero-shot

    ### Excel Formula
    ### Excel Formula
    IF(FALSE,NA(

Models                                   Exact Match                      Sketch Match
                                         0.25  0.50  0.75  0.90  0.99    0.25  0.50  0.75  0.90  0.99
FLAME (60M)                              0.01  0.06  0.34  0.70  0.93    0.10  0.24  0.55  0.84  0.94
FLAME (16M)                              0.00  0.03  0.24  0.59  0.89    0.11  0.25  0.54  0.76  0.90
Global Deduplication                     0.00  0.03  0.15  0.45  0.64    0.10  0.25  0.41  0.59  0.70
T5 (Generic objectives and tokenizer)    0.00  0.07  0.07  0.22  0.21    0.01  0.09  0.25  0.37  0.29

Table 7: Design choice experiments for the autocompletion task. We compare multiple pretraining design decisions: model size, pretraining data curation, domain-specific pretraining objectives, and tokenizer. We consider top-5 for Autocompletion (AC) with Exact Match (EM) and Sketch Match (SM). We note that FLAME outperforms all the other variants.

Models                            Exact Match                       Sketch Match
                                  0.25  0.50  0.75  0.90  Total    0.25  0.50  0.75  0.90  Total
Beam Search                       0.00  0.06  0.33  0.71  0.92     0.10  0.25  0.54  0.82  0.94
Group Beam Search (groups = 2)    0.01  0.06  0.34  0.70  0.93     0.10  0.24  0.55  0.84  0.94
Nucleus Sampling                  0.00  0.04  0.26  0.59  0.92     0.14  0.30  0.53  0.74  0.92
Top-K Sampling                    0.00  0.04  0.25  0.62  0.92     0.15  0.30  0.55  0.76  0.92

Table 8: Performance by decoding strategy for Autocompletion (top-5) with Exact Match and Sketch Match. Beam Search outperforms all the other strategies - Group Beam Search with a group size of 2, Nucleus Sampling, and Top-K Sampling.

C Training Details

We use the following HuggingFace configuration to train FLAME:

    {
      "architectures": ["T5ForConditionalGeneration"],
      "d_ff": 1024,
      "d_kv": 64,
      "d_model": 512,
      "decoder_start_token_id": 0,
      "dropout_rate": 0.1,
      "bos_token_id": 1,
      "eos_token_id": 2,
      "feed_forward_proj": "gated-gelu",
      "initializer_factor": 1.0,
      "is_encoder_decoder": true,
      "layer_norm_epsilon": 1e-06,
      "model_type": "t5",
      "num_decoder_layers": 8,
      "num_heads": 6,
      "num_layers": 8,
      "output_past": true,
      "pad_token_id": 0,
      "relative_attention_num_buckets": 32,
      "tie_word_embeddings": false,
      "vocab_size": 16479
    }

We use an Adafactor optimizer with a learning rate of 1e-4 and a clip factor of 1.0, with scale parameters and relative steps set to false. For fine-tuning, we use a weight decay of 0.1. We use a linear learning-rate schedule with 100 warm-up steps.

D Design Decisions (Autocompletion)

We detail our autocompletion evaluation, where we evaluate FLAME against different variations, in Table 7. We observe that FLAME beats all the different model variants.

E Decoder Autocompletion

In Table 8, we detail autocompletion results for different decoding strategies. We find that Beam Search beats other decoding methods in 7 out of 10 prefix lengths, and Top-K Sampling beats others in Sketch Match for smaller fractions of prefixes.
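Exact Match compares generated and reference formulas verbatim, while Sketch Match compares them after abstracting away concrete values. A minimal illustration of how such metrics could be computed (this regex-based sketching is our own simplification, not the paper's exact sketching procedure):

```python
import re

def sketch(formula):
    # Abstract concrete values: string literals, cell references, and
    # numeric constants are replaced with placeholder tokens.
    s = re.sub(r'"[^"]*"', "STR", formula)          # string literals
    s = re.sub(r"\$?[A-Z]{1,3}\$?\d+", "CELL", s)   # cell references like A1, $B$2
    s = re.sub(r"\b\d+(\.\d+)?\b", "NUM", s)        # numeric constants
    return s

def exact_match(pred, gold):
    # Exact Match: verbatim string equality.
    return pred == gold

def sketch_match(pred, gold):
    # Sketch Match: equality after abstracting concrete values.
    return sketch(pred) == sketch(gold)
```

Under this simplification, `=SUM(A1:A10)` and `=SUM(B2:B20)` share the sketch `=SUM(CELL:CELL)`, so they sketch-match without exact-matching.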