---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: canitedit
pretty_name: CanItEdit
tags:
- code-generation
- code
dataset_info:
features:
- name: id
dtype: int64
- name: name
dtype: string
- name: full_name
dtype: string
- name: before
dtype: string
- name: after
dtype: string
- name: tests
dtype: string
- name: instruction_descriptive
dtype: string
- name: instruction_lazy
dtype: string
- name: taxonomy
struct:
- name: change_kind
dtype: string
- name: libraries
sequence: string
- name: topic
dtype: string
splits:
- name: test
num_bytes: 564910
num_examples: 105
download_size: 250477
dataset_size: 564910
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions
CanItEdit is a benchmark for evaluating LLMs on instructional code editing, the task of updating a program given a natural language instruction. The benchmark contains 105 hand-crafted Python programs with before and after code blocks, two types of natural language instructions (descriptive and lazy), and a hidden test suite.
The dataset's two instruction types test model performance in two scenarios:
1. **Descriptive:** detailed instructions that replicate situations where users provide precise specifications, or where another model outlines a plan (similar to Reflexion-style prompting).
2. **Lazy:** informal instructions that resemble the typical queries users pose to LLMs for code generation.
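To make the feature schema above concrete, here is a purely hypothetical record in that shape. The field names mirror the dataset schema, but every value is invented for illustration and is not taken from the benchmark:

```python
# Hypothetical CanItEdit-shaped record. Field names follow the dataset
# schema; all values are invented examples, NOT real benchmark items.
example = {
    "id": 0,
    "name": "greet_default",
    "full_name": "greet_default_argument",
    "before": "def greet(name):\n    return f'Hello, {name}!'\n",
    "after": "def greet(name='world'):\n    return f'Hello, {name}!'\n",
    "tests": "assert greet() == 'Hello, world!'\n",
    "instruction_descriptive": (
        "Change the signature of `greet` so that `name` defaults to "
        "'world', allowing the function to be called with no arguments."
    ),
    "instruction_lazy": "make greet callable without arguments",
    "taxonomy": {
        "change_kind": "perfective",  # invented label for illustration
        "libraries": [],
        "topic": "strings",
    },
}

# The lazy instruction is shorter and less specific than the descriptive one.
print(len(example["instruction_lazy"]) < len(example["instruction_descriptive"]))  # prints True
```

Both instruction styles describe the same `before` → `after` change; only their specificity differs.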
For more information and results see [our paper](https://arxiv.org/abs/2312.12450).
## Citation
If you use our work, please cite our paper as follows:
```bibtex
@inproceedings{cassano2023edit,
title={{Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions}},
author={Federico Cassano and Luisa Li and Akul Sethi and Noah Shinn and Abby Brennan-Jones and Anton Lozhkov and Carolyn Jane Anderson and Arjun Guha},
booktitle={The First International Workshop on Large Language Model for Code},
year={2024},
url={https://arxiv.org/abs/2312.12450}
}
```
## How To Evaluate
All the code for evaluating the benchmark can be found in our [GitHub repository](https://github.com/nuprl/CanItEdit). |
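The repository contains the official harness. As a rough sketch of the underlying idea only (assuming a candidate edit is judged by executing the hidden test suite against it; the real harness sandboxes execution and is not this simple), a minimal pass/fail check might look like:

```python
def passes_tests(program: str, tests: str) -> bool:
    """Return True if the hidden test suite passes against `program`.

    Simplified sketch: execute the (edited) program and then its tests
    in a fresh namespace; any exception or failed assertion means failure.
    """
    namespace = {}
    try:
        exec(program, namespace)  # define the candidate program
        exec(tests, namespace)    # assertions raise on failure
        return True
    except Exception:
        return False

# Toy usage with an invented before/after pair (not from the benchmark):
before = "def double(x):\n    return x + x + x\n"  # buggy: triples instead
after = "def double(x):\n    return 2 * x\n"
tests = "assert double(3) == 6\nassert double(0) == 0\n"

print(passes_tests(before, tests))  # prints False: the buggy version fails
print(passes_tests(after, tests))   # prints True: the edit satisfies the tests
```

A model's edit would be scored by substituting its generated program for `after` here.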